# Peeter Joot's Blog.


# Posts Tagged ‘geometric product’

## Plane wave solutions of Maxwell’s equation using Geometric Algebra

Posted by peeterjoot on September 3, 2012

# Motivation

Study of reflection and transmission of radiation in isotropic, charge and current free, linear matter utilizes the plane wave solutions to Maxwell’s equations. These have the structure of phasor equations, with some specific constraints on the components and the exponents.

These constraints are usually derived starting with the plain old vector form of Maxwell’s equations, and it is natural to wonder how this is done directly using Geometric Algebra. [1] provides one such derivation, using the covariant form of Maxwell’s equations. Here’s a slightly more pedestrian way of doing the same.

# Maxwell’s equations in media

We start with Maxwell’s equations for linear matter as found in [2]

\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{E} = 0\end{aligned} \hspace{\stretch{1}}(1.2.1a)

\begin{aligned}\boldsymbol{\nabla} \times \mathbf{E} = -\frac{\partial {\mathbf{B}}}{\partial {t}}\end{aligned} \hspace{\stretch{1}}(1.2.1b)

\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{B} = 0\end{aligned} \hspace{\stretch{1}}(1.2.1c)

\begin{aligned}\boldsymbol{\nabla} \times \mathbf{B} = \mu\epsilon \frac{\partial {\mathbf{E}}}{\partial {t}}.\end{aligned} \hspace{\stretch{1}}(1.2.1d)

We merge these using the geometric identity

\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{a} + I \boldsymbol{\nabla} \times \mathbf{a} = \boldsymbol{\nabla} \mathbf{a},\end{aligned} \hspace{\stretch{1}}(1.2.2)

where $I$ is the 3D pseudoscalar $I = \mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_3$, to find

\begin{aligned}\boldsymbol{\nabla} \mathbf{E} = -I \frac{\partial {\mathbf{B}}}{\partial {t}}\end{aligned} \hspace{\stretch{1}}(1.2.3a)

\begin{aligned}\boldsymbol{\nabla} \mathbf{B} = I \mu\epsilon \frac{\partial {\mathbf{E}}}{\partial {t}}.\end{aligned} \hspace{\stretch{1}}(1.2.3b)

We want the time derivative operator on the RHS of 1.2.3b to have the same $1/L$ dimensions as the spatial gradient, so we divide through by $\sqrt{\mu\epsilon} I$ to find

\begin{aligned}-I \frac{1}{{\sqrt{\mu\epsilon}}} \boldsymbol{\nabla} \mathbf{B} = \sqrt{\mu\epsilon} \frac{\partial {\mathbf{E}}}{\partial {t}}.\end{aligned} \hspace{\stretch{1}}(1.2.4)

This can now be added to 1.2.3a for

\begin{aligned}\left(\boldsymbol{\nabla} + \sqrt{\mu\epsilon} \frac{\partial {}}{\partial {t}} \right) \left( \mathbf{E} + \frac{I}{\sqrt{\mu\epsilon}} \mathbf{B} \right)= 0.\end{aligned} \hspace{\stretch{1}}(1.2.5)

This is Maxwell’s equation in linear isotropic charge and current free matter in Geometric Algebra form.

# Phasor solutions

We write the electromagnetic field as

\begin{aligned}F = \left( \mathbf{E} + \frac{I}{\sqrt{\mu\epsilon}} \mathbf{B} \right),\end{aligned} \hspace{\stretch{1}}(1.3.6)

so that for vacuum where $1/\sqrt{\mu \epsilon} = c$ we have the usual $F = \mathbf{E} + I c \mathbf{B}$. Assuming a phasor solution of

\begin{aligned}\tilde{F} = F_0 e^{i (\mathbf{k} \cdot \mathbf{x} - \omega t)}\end{aligned} \hspace{\stretch{1}}(1.3.7)

where $F_0$ is allowed to be complex, and the actual field is obtained by taking the real part

\begin{aligned}F = \text{Real} \tilde{F} = \text{Real}(F_0) \cos(\mathbf{k} \cdot \mathbf{x} - \omega t)-\text{Imag}(F_0) \sin(\mathbf{k} \cdot \mathbf{x} - \omega t).\end{aligned} \hspace{\stretch{1}}(1.3.8)

Note carefully that we are using a scalar imaginary $i$, as well as the multivector (pseudoscalar) $I$, despite the fact that both square to scalar minus one.

We now seek the constraints on $\mathbf{k}$, $\omega$, and $F_0$ that allow this to be a solution to 1.2.5

\begin{aligned}0 = \left(\boldsymbol{\nabla} + \sqrt{\mu\epsilon} \frac{\partial {}}{\partial {t}} \right) \tilde{F}.\end{aligned} \hspace{\stretch{1}}(1.3.9)

As usual in the non-geometric-algebra treatment, we observe that any such solution $F$ to Maxwell’s equation is also a wave equation solution. In GA we can show this by left multiplying with the conjugate of the first order operator,

\begin{aligned}\begin{aligned}0 &= \left(\boldsymbol{\nabla} + \sqrt{\mu\epsilon} \frac{\partial {}}{\partial {t}} \right) \tilde{F} \\ &= \left(\boldsymbol{\nabla} - \sqrt{\mu\epsilon} \frac{\partial {}}{\partial {t}} \right) \left(\boldsymbol{\nabla} + \sqrt{\mu\epsilon} \frac{\partial {}}{\partial {t}} \right) \tilde{F} \\ &=\left( \boldsymbol{\nabla}^2 - \mu\epsilon \frac{\partial^2}{\partial t^2} \right) \tilde{F} \\ &=\left( \boldsymbol{\nabla}^2 - \frac{1}{{v^2}} \frac{\partial^2}{\partial t^2} \right) \tilde{F},\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.3.10)

where $v = 1/\sqrt{\mu\epsilon}$ is the speed of the wave described by this solution.

Inserting the exponential form of our assumed solution 1.3.7 we find

\begin{aligned}0 = -(\mathbf{k}^2 - \omega^2/v^2) F_0 e^{i (\mathbf{k} \cdot \mathbf{x} - \omega t)},\end{aligned} \hspace{\stretch{1}}(1.3.11)

which implies that the wave number vector $\mathbf{k}$ and the angular frequency $\omega$ are related by

\begin{aligned}v^2 \mathbf{k}^2 = \omega^2.\end{aligned} \hspace{\stretch{1}}(1.3.12)

Our assumed solution must also satisfy the first order system 1.3.9

\begin{aligned}\begin{aligned}0 &= \left(\boldsymbol{\nabla} + \sqrt{\mu\epsilon} \frac{\partial {}}{\partial {t}} \right) F_0e^{i (\mathbf{k} \cdot \mathbf{x} - \omega t)} \\ &=i\left(\mathbf{e}_m k_m - \frac{\omega}{v}\right) F_0e^{i (\mathbf{k} \cdot \mathbf{x} - \omega t)} \\ &=i k ( \hat{\mathbf{k}} - 1 ) F_0 e^{i (\mathbf{k} \cdot \mathbf{x} - \omega t)}.\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.3.13)

The constraints on $F_0$ must then be given by

\begin{aligned}0 = ( \hat{\mathbf{k}} - 1 ) F_0.\end{aligned} \hspace{\stretch{1}}(1.3.14)

With

\begin{aligned}F_0 = \mathbf{E}_0 + I v \mathbf{B}_0,\end{aligned} \hspace{\stretch{1}}(1.3.15)

we must then have all grades of the multivector equation equal to zero

\begin{aligned}0 = ( \hat{\mathbf{k}} - 1 ) \left(\mathbf{E}_0 + I v \mathbf{B}_0\right).\end{aligned} \hspace{\stretch{1}}(1.3.16)

Writing out all the geometric products, noting that $I$ commutes with all of $\hat{\mathbf{k}}$, $\mathbf{E}_0$, and $\mathbf{B}_0$ and employing the identity $\mathbf{a} \mathbf{b} = \mathbf{a} \cdot \mathbf{b} + \mathbf{a} \wedge \mathbf{b}$ we have

\begin{aligned}0 = \hat{\mathbf{k}} \cdot \mathbf{E}_0+ \left( - \mathbf{E}_0 + I v \hat{\mathbf{k}} \wedge \mathbf{B}_0 \right)+ \left( \hat{\mathbf{k}} \wedge \mathbf{E}_0 - I v \mathbf{B}_0 \right)+ I v \hat{\mathbf{k}} \cdot \mathbf{B}_0,\end{aligned} \hspace{\stretch{1}}(1.3.17)

where the terms are grouped by grade: scalar, vector, bivector, and pseudoscalar respectively.

Requiring each grade to vanish separately, this is

\begin{aligned}0 = \hat{\mathbf{k}} \cdot \mathbf{E}_0 \end{aligned} \hspace{\stretch{1}}(1.3.18a)

\begin{aligned}\mathbf{E}_0 =- \hat{\mathbf{k}} \times v \mathbf{B}_0 \end{aligned} \hspace{\stretch{1}}(1.3.18b)

\begin{aligned}v \mathbf{B}_0 = \hat{\mathbf{k}} \times \mathbf{E}_0 \end{aligned} \hspace{\stretch{1}}(1.3.18c)

\begin{aligned}0 = \hat{\mathbf{k}} \cdot \mathbf{B}_0.\end{aligned} \hspace{\stretch{1}}(1.3.18d)

This and 1.3.12 describe all the constraints on our phasor that are required for it to be a solution. Note that only one of the two cross product equations is required, because the two are not independent. This can be shown by crossing $\hat{\mathbf{k}}$ with 1.3.18b and using the identity

\begin{aligned}\mathbf{a} \times (\mathbf{a} \times \mathbf{b}) = - \mathbf{a}^2 \mathbf{b} + \mathbf{a} (\mathbf{a} \cdot \mathbf{b}).\end{aligned} \hspace{\stretch{1}}(1.3.19)

One can easily find that 1.3.18b and 1.3.18c provide the same relationship between the $\mathbf{E}_0$ and $\mathbf{B}_0$ components of $F_0$. Writing out the complete expression for $F_0$ we have

\begin{aligned}\begin{aligned}F_0 &= \mathbf{E}_0 + I v \mathbf{B}_0 \\ &=\mathbf{E}_0 + I \hat{\mathbf{k}} \times \mathbf{E}_0 \\ &=\mathbf{E}_0 + \hat{\mathbf{k}} \wedge \mathbf{E}_0.\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.3.20)

Since $\hat{\mathbf{k}} \cdot \mathbf{E}_0 = 0$, this is

\begin{aligned}F_0 = (1 + \hat{\mathbf{k}}) \mathbf{E}_0.\end{aligned} \hspace{\stretch{1}}(1.3.21)
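As a check, this form automatically satisfies 1.3.14: since $\hat{\mathbf{k}}^2 = 1$ we have

\begin{aligned}( \hat{\mathbf{k}} - 1 )( 1 + \hat{\mathbf{k}} ) = \hat{\mathbf{k}} + \hat{\mathbf{k}}^2 - 1 - \hat{\mathbf{k}} = 0,\end{aligned}

so any multivector of the form $( 1 + \hat{\mathbf{k}} ) M$ is killed by left multiplication with $\hat{\mathbf{k}} - 1$.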

Had we been clever enough, this could have been deduced directly from 1.3.14, since we require a product that is killed by left multiplication with $\hat{\mathbf{k}} - 1$. Our complete plane wave solution to Maxwell’s equation is therefore given by

\begin{aligned}\begin{aligned}F &= \text{Real}(\tilde{F}) = \mathbf{E} + \frac{I}{\sqrt{\mu\epsilon}} \mathbf{B} \\ \tilde{F} &= (1 \pm \hat{\mathbf{k}}) \mathbf{E}_0 e^{i (\mathbf{k} \cdot \mathbf{x} \mp \omega t)} \\ 0 &= \hat{\mathbf{k}} \cdot \mathbf{E}_0 \\ \mathbf{k}^2 &= \omega^2 \mu \epsilon.\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.3.22)

# References

[1] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.

[2] D.J. Griffiths. Introduction to Electrodynamics. Prentice-Hall, 1981.

## Geometric Algebra. The very quickest introduction.

Posted by peeterjoot on March 17, 2012

# Motivation.

An attempt to make a relatively concise introduction to Geometric (or Clifford) Algebra. Much more complete introductions to the subject can be found in [1], [2], and [3].

# Axioms

We have a few basic principles upon which the algebra is based

1. Vectors can be multiplied.
2. The square of a vector is the (squared) length of that vector (with appropriate generalizations for non-Euclidean metrics).
3. Vector products are associative (but not necessarily commutative).

That’s really all there is to it, and the rest, paraphrasing Feynman, can be figured out by anybody sufficiently clever.

# By example. The 2D case.

Consider a 2D Euclidean space, and the product of two vectors $\mathbf{a}$ and $\mathbf{b}$ in that space. Utilizing a standard orthonormal basis $\{\mathbf{e}_1, \mathbf{e}_2\}$ we can write

\begin{aligned}\mathbf{a} &= \mathbf{e}_1 x_1 + \mathbf{e}_2 x_2 \\ \mathbf{b} &= \mathbf{e}_1 y_1 + \mathbf{e}_2 y_2,\end{aligned} \hspace{\stretch{1}}(3.1)

and let’s write out the product of these two vectors $\mathbf{a} \mathbf{b}$, not yet knowing what we will end up with. That is

\begin{aligned}\mathbf{a} \mathbf{b} &= (\mathbf{e}_1 x_1 + \mathbf{e}_2 x_2 )( \mathbf{e}_1 y_1 + \mathbf{e}_2 y_2 ) \\ &= \mathbf{e}_1^2 x_1 y_1 + \mathbf{e}_2^2 x_2 y_2+ \mathbf{e}_1 \mathbf{e}_2 x_1 y_2 + \mathbf{e}_2 \mathbf{e}_1 x_2 y_1\end{aligned}

From axiom 2 we have $\mathbf{e}_1^2 = \mathbf{e}_2^2 = 1$, so we have

\begin{aligned}\mathbf{a} \mathbf{b} = x_1 y_1 + x_2 y_2 + \mathbf{e}_1 \mathbf{e}_2 x_1 y_2 + \mathbf{e}_2 \mathbf{e}_1 x_2 y_1.\end{aligned} \hspace{\stretch{1}}(3.3)

We’ve multiplied two vectors and ended up with a scalar component (and recognize that this part of the vector product is the dot product), and a component that is a “something else”. We’ll call this something else a bivector, and see that it is characterized by a product of non-collinear vectors. These products $\mathbf{e}_1 \mathbf{e}_2$ and $\mathbf{e}_2 \mathbf{e}_1$ are in fact related, and we can see that by looking at the case of $\mathbf{b} = \mathbf{a}$. For that we have

\begin{aligned}\mathbf{a}^2 &=x_1 x_1 + x_2 x_2 + \mathbf{e}_1 \mathbf{e}_2 x_1 x_2 + \mathbf{e}_2 \mathbf{e}_1 x_2 x_1 \\ &={\left\lvert{\mathbf{a}}\right\rvert}^2 +x_1 x_2 ( \mathbf{e}_1 \mathbf{e}_2 + \mathbf{e}_2 \mathbf{e}_1 )\end{aligned}

Since axiom 2 requires that the square of a vector equal its (squared) length, we must then have

\begin{aligned}\mathbf{e}_1 \mathbf{e}_2 + \mathbf{e}_2 \mathbf{e}_1 = 0,\end{aligned} \hspace{\stretch{1}}(3.4)

or

\begin{aligned}\mathbf{e}_2 \mathbf{e}_1 = -\mathbf{e}_1 \mathbf{e}_2.\end{aligned} \hspace{\stretch{1}}(3.5)

We see that Euclidean orthonormal vectors anticommute. What we can see with some additional study is that any collinear vectors commute, and in Euclidean spaces (of any dimension) vectors that are normal to each other anticommute (this can also be taken as a definition of normal).

We can now return to our product of two vectors 3.3 and simplify it slightly

\begin{aligned}\mathbf{a} \mathbf{b} = x_1 y_1 + x_2 y_2 + \mathbf{e}_1 \mathbf{e}_2 (x_1 y_2 - x_2 y_1).\end{aligned} \hspace{\stretch{1}}(3.6)

The product of two vectors in 2D is seen here to have one scalar component, and one bivector component (an irreducible product of two normal vectors). Observe the symmetric and antisymmetric split of the scalar and bivector components above. This symmetry and antisymmetry can be made explicit, introducing dot and wedge product notation respectively

\begin{aligned}\mathbf{a} \cdot \mathbf{b} &= \frac{1}{{2}}( \mathbf{a} \mathbf{b} + \mathbf{b} \mathbf{a}) = x_1 y_1 + x_2 y_2 \\ \mathbf{a} \wedge \mathbf{b} &= \frac{1}{{2}}( \mathbf{a} \mathbf{b} - \mathbf{b} \mathbf{a}) = \mathbf{e}_1 \mathbf{e}_2 (x_1 y_2 - x_2 y_1).\end{aligned} \hspace{\stretch{1}}(3.7)

so that the vector product can be written as

\begin{aligned}\mathbf{a} \mathbf{b} = \mathbf{a} \cdot \mathbf{b} + \mathbf{a} \wedge \mathbf{b}.\end{aligned} \hspace{\stretch{1}}(3.9)
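As a quick numeric illustration of 3.6 and 3.7, here is a minimal sketch (my own throwaway code, with an ad hoc multivector representation) that multiplies two 2D vectors and splits the result into its symmetric (dot) and antisymmetric (wedge) parts:

```python
# Minimal 2D geometric algebra sketch: a multivector is a 4-tuple of coefficients
# on the basis (1, e1, e2, e1 e2).  The representation and names are illustrative only.

def gp(a, b):
    """Geometric product of two 2D multivectors (s, x, y, b12)."""
    s1, x1, y1, b1 = a
    s2, x2, y2, b2 = b
    return (
        s1*s2 + x1*x2 + y1*y2 - b1*b2,   # scalar part
        s1*x2 + x1*s2 - y1*b2 + b1*y2,   # e1 part
        s1*y2 + y1*s2 + x1*b2 - b1*x2,   # e2 part
        s1*b2 + b1*s2 + x1*y2 - y1*x2,   # e1 e2 part
    )

def vec(x, y):
    return (0.0, x, y, 0.0)

a, b = vec(1.0, 2.0), vec(3.0, -1.0)
ab, ba = gp(a, b), gp(b, a)
dot = tuple(0.5*(p + q) for p, q in zip(ab, ba))    # (x1 y1 + x2 y2, 0, 0, 0)
wedge = tuple(0.5*(p - q) for p, q in zip(ab, ba))  # (0, 0, 0, x1 y2 - x2 y1)
print(ab, dot, wedge)   # (1.0, 0.0, 0.0, -7.0) (1.0, 0.0, 0.0, 0.0) (0.0, 0.0, 0.0, -7.0)
```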

# Pseudoscalar

In many contexts it is useful to introduce an ordered product of all the unit vectors for the space, called the pseudoscalar. In our 2D case this is

\begin{aligned}i = \mathbf{e}_1 \mathbf{e}_2,\end{aligned} \hspace{\stretch{1}}(4.10)

a quantity that we find behaves like the complex imaginary. That can be shown by considering its square

\begin{aligned}(\mathbf{e}_1 \mathbf{e}_2)^2&=(\mathbf{e}_1 \mathbf{e}_2)(\mathbf{e}_1 \mathbf{e}_2) \\ &=\mathbf{e}_1 (\mathbf{e}_2 \mathbf{e}_1) \mathbf{e}_2 \\ &=-\mathbf{e}_1 (\mathbf{e}_1 \mathbf{e}_2) \mathbf{e}_2 \\ &=-(\mathbf{e}_1 \mathbf{e}_1) (\mathbf{e}_2 \mathbf{e}_2) \\ &=-(1)(1) \\ &= -1\end{aligned}

Here the anticommutation of normal vectors property has been used, as well as (for the first time) the associative multiplication axiom.

In a 3D context, you’ll see the pseudoscalar in many places (expressing the normals to planes for example). It also shows up in a number of fundamental relationships. For example, if one writes

\begin{aligned}I = \mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_3\end{aligned} \hspace{\stretch{1}}(4.11)

for the 3D pseudoscalar, then it’s also possible to show

\begin{aligned}\mathbf{a} \mathbf{b} = \mathbf{a} \cdot \mathbf{b} + I (\mathbf{a} \times \mathbf{b})\end{aligned} \hspace{\stretch{1}}(4.12)

something that will be familiar to the student of QM, where we see this in the context of the Pauli matrices. The Pauli matrices encode the same Clifford algebraic structure, but we do not need an explicit matrix representation to work with it.
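As a small numeric check of this correspondence (a quick sketch of my own, not from the references), the matrix product of $\mathbf{a} \cdot \boldsymbol{\sigma}$ and $\mathbf{b} \cdot \boldsymbol{\sigma}$ reproduces the dot plus cross product split:

```python
import numpy as np

# Pauli matrices; with vectors mapped to a . sigma, the matrix product reproduces
# (a . b) 1 + i (a x b) . sigma, mirroring a b = a . b + I (a x b).
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = [s1, s2, s3]

a = np.array([1.0, 2.0, 3.0])   # arbitrary test vectors
b = np.array([-1.0, 0.5, 2.0])
A = sum(ak * sk for ak, sk in zip(a, sigma))
B = sum(bk * sk for bk, sk in zip(b, sigma))
rhs = np.dot(a, b) * np.eye(2) + 1j * sum(ck * sk for ck, sk in zip(np.cross(a, b), sigma))
print(np.allclose(A @ B, rhs))  # True
```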

# Rotations

Very much like complex numbers, we can utilize exponentials to perform rotations. A rotation in the sense that takes $\mathbf{e}_1$ toward $\mathbf{e}_2$ can be expressed as

\begin{aligned}\mathbf{a} e^{i \theta}&=(\mathbf{e}_1 x_1 + \mathbf{e}_2 x_2) (\cos\theta + \mathbf{e}_1 \mathbf{e}_2 \sin\theta) \\ &=\mathbf{e}_1 (x_1 \cos\theta - x_2 \sin\theta)+\mathbf{e}_2 (x_2 \cos\theta + x_1 \sin\theta)\end{aligned}

More generally, even in N dimensional Euclidean spaces, if $\mathbf{a}$ is a vector in a plane, and $\hat{\mathbf{u}}$ and $\hat{\mathbf{v}}$ are perpendicular unit vectors in that plane, then the rotation through angle $\theta$ is given by

\begin{aligned}\mathbf{a} \rightarrow \mathbf{a} e^{\hat{\mathbf{u}} \hat{\mathbf{v}} \theta}.\end{aligned} \hspace{\stretch{1}}(5.13)

This is illustrated in figure (1).

Figure 1: Plane rotation.
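A quick numeric check of this one sided rotation (a minimal sketch of my own; the component form is just the expansion worked out above):

```python
import math

# Rotate x1 e1 + x2 e2 by right multiplying with exp(i theta) = cos(theta) + e1 e2 sin(theta).
# Using e1 (e1 e2) = e2 and e2 (e1 e2) = -e1, the product regroups into the usual rotation.
def rotate(x1, x2, theta):
    c, s = math.cos(theta), math.sin(theta)
    return (x1 * c - x2 * s, x2 * c + x1 * s)

print(rotate(1.0, 0.0, math.pi / 2))   # ~(0, 1): e1 is rotated into e2
```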

Notice that we have expressed the rotation here without utilizing a normal direction for the plane. The sense of the rotation is encoded by the bivector $\hat{\mathbf{u}} \hat{\mathbf{v}}$ that describes the plane and the orientation of the rotation (or, by duality, the direction of the normal in a 3D space). By avoiding a requirement to encode the rotation using a normal to the plane we have a method of expressing the rotation that works not only in 3D spaces, but also in 2D and greater than 3D spaces, something that isn’t possible when we restrict ourselves to traditional vector algebra (where quantities like the cross product can’t be defined in a 2D or 4D space, despite the fact that the things they may represent, like torque, are planar phenomena that do not have any intrinsic requirement for a normal that falls out of the plane).

When $\mathbf{a}$ does not lie in the plane spanned by the vectors $\hat{\mathbf{u}}$ and $\hat{\mathbf{v}}$ , as in figure (2), we must express the rotations differently. A rotation then takes the form

\begin{aligned}\mathbf{a} \rightarrow e^{-\hat{\mathbf{u}} \hat{\mathbf{v}} \theta/2} \mathbf{a} e^{\hat{\mathbf{u}} \hat{\mathbf{v}} \theta/2}.\end{aligned} \hspace{\stretch{1}}(5.14)

Figure 2: 3D rotation.

In the 2D case, and when the vector lies in the plane, this reduces to the one sided complex exponential operator used above. We see these types of paired half angle rotations in QM, and they are also used extensively in computer graphics under the guise of quaternions.

# References

[1] L. Dorst, D. Fontijne, and S. Mann. Geometric Algebra for Computer Science. Morgan Kaufmann, San Francisco, 2007.

[2] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.

[3] D. Hestenes. New Foundations for Classical Mechanics. Kluwer Academic Publishers, 1999.

## PHY450HS1: Relativistic electrodynamics: some exam reflection.

Posted by peeterjoot on April 28, 2011


# Charged particle in a circle.

From the 2008 PHY353 exam, given a particle of charge $q$ moving in a circle of radius $a$ at constant angular frequency $\omega$.

1. Find the Lienard-Wiechert potentials for points on the z-axis.
2. Find the electric and magnetic fields at the center.

When I tried this I did it for points not just on the z-axis. It turns out that we also got this question on the exam (but stated slightly differently). Since I’ll not get to see my exam solution again, let’s work through this at a leisurely rate, and see if things look right. The problem as stated in this old practice exam is easier since it doesn’t say to calculate the fields from the four potentials, so there was nothing preventing one from just grinding away and plugging stuff into the Lienard-Wiechert equations for the fields (as I did when I tried it for practice).

## The potentials.

Let’s set up our coordinate system in cylindrical coordinates. For the charged particle and the point at which we measure the field, with $i = \mathbf{e}_1 \mathbf{e}_2$, take

\begin{aligned}\mathbf{x}(t) &= a \mathbf{e}_1 e^{i \omega t} \\ \mathbf{r} &= z \mathbf{e}_3 + \rho \mathbf{e}_1 e^{i \phi}\end{aligned} \hspace{\stretch{1}}(1.1)

Here I’m using the geometric product of vectors. If that’s unfamiliar, just substitute

\begin{aligned}\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\} \rightarrow \{\sigma_1, \sigma_2, \sigma_3\}\end{aligned} \hspace{\stretch{1}}(1.3)

We can do that since the Pauli matrices also have the same semantics (with a small difference since the geometric square of a unit vector is defined as the unit scalar, whereas the Pauli matrix square is the identity matrix). The semantics we require of this vector product are just $\mathbf{e}_\alpha^2 = 1$ and $\mathbf{e}_\alpha \mathbf{e}_\beta = - \mathbf{e}_\beta \mathbf{e}_\alpha$ for any $\alpha \ne \beta$.

I’ll also be loose with notation and use $\text{Real}(X) = \left\langle{{X}}\right\rangle$ to select the scalar part of a multivector (or with the Pauli matrices, the portion proportional to the identity matrix).

Our task is to compute the Lienard-Wiechert potentials. Those are

\begin{aligned}A^0 &= \frac{q}{R^{*}} \\ \mathbf{A} &= A^0 \frac{\mathbf{v}}{c},\end{aligned} \hspace{\stretch{1}}(1.4)

where

\begin{aligned}\mathbf{R} &= \mathbf{r} - \mathbf{x}(t_r) \\ R = {\left\lvert{\mathbf{R}}\right\rvert} &= c (t - t_r) \\ R^{*} &= R - \frac{\mathbf{v}}{c} \cdot \mathbf{R} \\ \mathbf{v} &= \frac{d\mathbf{x}}{dt_r}.\end{aligned} \hspace{\stretch{1}}(1.6)

We’ll need (eventually)

\begin{aligned}\mathbf{v} &= a \omega \mathbf{e}_2 e^{i \omega t_r} = a \omega ( -\sin \omega t_r, \cos\omega t_r, 0) \\ \dot{\mathbf{v}} &= -a \omega^2 \mathbf{e}_1 e^{i \omega t_r} = -a \omega^2 (\cos\omega t_r, \sin\omega t_r, 0)\end{aligned} \hspace{\stretch{1}}(1.10)

and also need our retarded distance vector

\begin{aligned}\mathbf{R} = z \mathbf{e}_3 + \mathbf{e}_1 (\rho e^{i \phi} - a e^{i \omega t_r} ),\end{aligned} \hspace{\stretch{1}}(1.12)

From this we have

\begin{aligned}R^2 &= z^2 + {\left\lvert{\mathbf{e}_1 (\rho e^{i \phi} - a e^{i \omega t_r} )}\right\rvert}^2 \\ &= z^2 + \rho^2 + a^2 - 2 \rho a (\mathbf{e}_1 e^{i \phi}) \cdot (\mathbf{e}_1 e^{i \omega t_r}) \\ &= z^2 + \rho^2 + a^2 - 2 \rho a \text{Real}( e^{ i(\phi - \omega t_r) } ) \\ &= z^2 + \rho^2 + a^2 - 2 \rho a \cos(\phi - \omega t_r)\end{aligned}

So

\begin{aligned}R = \sqrt{z^2 + \rho^2 + a^2 - 2 \rho a \cos( \phi - \omega t_r ) }.\end{aligned} \hspace{\stretch{1}}(1.13)

Next we need

\begin{aligned}\mathbf{R} \cdot \mathbf{v}/c&= (z \mathbf{e}_3 + \mathbf{e}_1 (\rho e^{i \phi} - a e^{i \omega t_r} )) \cdot \left(a \frac{\omega}{c} \mathbf{e}_2 e^{i \omega t_r} \right) \\ &=a \frac{\omega }{c}\text{Real}(i (\rho e^{-i \phi} - a e^{-i \omega t_r} ) e^{i \omega t_r} ) \\ &=a \frac{\omega }{c}\rho \text{Real}( i e^{-i \phi + i \omega t_r} ) \\ &=a \frac{\omega }{c}\rho \sin(\phi - \omega t_r)\end{aligned}

So we have

\begin{aligned}R^{*} = \sqrt{z^2 + \rho^2 + a^2 - 2 \rho a \cos( \phi - \omega t_r ) }-a \frac{\omega }{c} \rho \sin(\phi - \omega t_r)\end{aligned} \hspace{\stretch{1}}(1.14)

Writing $k = \omega/c$, and having a peek back at 1.4, our potentials are now solved for

\begin{aligned}\boxed{\begin{aligned}A^0 &= \frac{q}{\sqrt{z^2 + \rho^2 + a^2 - 2 \rho a \cos( \phi - k c t_r ) }} \\ \mathbf{A} &= A^0 a k ( -\sin k c t_r, \cos k c t_r, 0).\end{aligned}}\end{aligned} \hspace{\stretch{1}}(1.24)

The caveat is that $t_r$ is only specified implicitly, according to

\begin{aligned}\boxed{c t_r = c t - \sqrt{z^2 + \rho^2 + a^2 - 2 \rho a \cos( \phi - k c t_r ) }.}\end{aligned} \hspace{\stretch{1}}(1.16)

There doesn’t appear to be much hope of solving for $t_r$ explicitly in closed form.
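If numbers are wanted, though, 1.16 is easy to handle numerically. Here is a minimal fixed-point iteration sketch (the parameter values are arbitrary illustrative choices, in units where $c = 1$); for a subluminal charge ($a \omega < c$) the iteration converges quickly:

```python
import math

# Illustrative parameters (arbitrary), in units with c = 1; note a*omega < c.
c, a, omega = 1.0, 0.5, 1.0
z, rho, phi, t = 0.3, 2.0, 0.4, 10.0
k = omega / c

def R_of(tr):
    return math.sqrt(z**2 + rho**2 + a**2 - 2*rho*a*math.cos(phi - k*c*tr))

def retarded_time(t, n_iter=100):
    """Fixed-point iteration for c t_r = c t - R(t_r), i.e. equation (1.16)."""
    tr = t   # initial guess
    for _ in range(n_iter):
        tr = t - R_of(tr) / c
    return tr

tr = retarded_time(t)
print(tr, abs(c*(t - tr) - R_of(tr)))   # the residual should be ~0
```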

## General fields for this system.

With

\begin{aligned}\mathbf{R}^{*} = \mathbf{R} - \frac{\mathbf{v}}{c} R,\end{aligned} \hspace{\stretch{1}}(1.17)

the fields are

\begin{aligned}\boxed{\begin{aligned}\mathbf{E} &= q (1 - \mathbf{v}^2/c^2) \frac{\mathbf{R}^{*}}{{R^{*}}^3} + \frac{q}{{R^{*}}^3} \mathbf{R} \times (\mathbf{R}^{*} \times \dot{\mathbf{v}}/c^2) \\ \mathbf{B} &= \frac{\mathbf{R}}{R} \times \mathbf{E}.\end{aligned}}\end{aligned} \hspace{\stretch{1}}(1.18)

In there we have

\begin{aligned}1 - \mathbf{v}^2/c^2 = 1 - a^2 \frac{\omega^2}{c^2} = 1 - a^2 k^2\end{aligned} \hspace{\stretch{1}}(1.19)

and

\begin{aligned}\mathbf{R}^{*} &= z \mathbf{e}_3 + \mathbf{e}_1 (\rho e^{i \phi} - a e^{i k c t_r} )-a k \mathbf{e}_2 e^{i k c t_r} R \\ &= z \mathbf{e}_3 + \mathbf{e}_1 (\rho e^{i \phi} - a (1 - k R i) e^{i k c t_r} )\end{aligned}

Writing this out in coordinates isn’t particularly illuminating, but can be done for completeness without too much trouble

\begin{aligned}\mathbf{R}^{*} = ( \rho \cos\phi - a \cos k c t_r + a k R \sin k c t_r, \rho \sin\phi - a \sin k c t_r - a k R \cos k c t_r, z )\end{aligned} \hspace{\stretch{1}}(1.20)

In one sense the problem could be considered solved, since we have all the pieces of the puzzle. The outstanding question is whether or not the resulting mess can be simplified at all. Let’s see if the cross product reduces at all. Using

\begin{aligned}\mathbf{R} \times (\mathbf{R}^{*} \times \dot{\mathbf{v}}/c^2) =\mathbf{R}^{*} (\mathbf{R} \cdot \dot{\mathbf{v}}/c^2) - \frac{\dot{\mathbf{v}}}{c^2}(\mathbf{R} \cdot \mathbf{R}^{*})\end{aligned} \hspace{\stretch{1}}(1.21)

Perhaps one or more of these dot products can be simplified? One of them does reduce nicely

\begin{aligned}\mathbf{R}^{*} \cdot \mathbf{R} &= ( \mathbf{R} - R \mathbf{v}/c ) \cdot \mathbf{R} \\ &= R^2 - (\mathbf{R} \cdot \mathbf{v}/c) R \\ &= R^2 - R a k \rho \sin(\phi - k c t_r) \\ &= R(R - a k \rho \sin(\phi - k c t_r))\end{aligned}

\begin{aligned}\mathbf{R} \cdot \dot{\mathbf{v}}/c^2&=\Bigl(z \mathbf{e}_3 + \mathbf{e}_1 (\rho e^{i \phi} - a e^{i \omega t_r} ) \Bigr) \cdot(-a k^2 \mathbf{e}_1 e^{i \omega t_r} ) \\ &=- a k^2 \left\langle{{\mathbf{e}_1 (\rho e^{i \phi} - a e^{i \omega t_r} ) \mathbf{e}_1 e^{i \omega t_r} }}\right\rangle \\ &=- a k^2 \left\langle{{(\rho e^{i \phi} - a e^{i \omega t_r} ) e^{-i \omega t_r} }}\right\rangle \\ &=- a k^2 \left\langle{{\rho e^{i \phi - i \omega t_r} - a }}\right\rangle \\ &=- a k^2 ( \rho \cos(\phi - k c t_r) - a )\end{aligned}

Putting this cross product back together we have

\begin{aligned}\mathbf{R} \times (\mathbf{R}^{*} \times \dot{\mathbf{v}}/c^2)&=a k^2 ( a -\rho \cos(\phi - k c t_r) ) \mathbf{R}^{*} +a k^2 \mathbf{e}_1 e^{i k c t_r} R(R - a k \rho \sin(\phi - k c t_r)) \\ &=a k^2 ( a -\rho \cos(\phi - k c t_r) ) \Bigl(z \mathbf{e}_3 + \mathbf{e}_1 (\rho e^{i \phi} - a (1 - k R i) e^{i k c t_r} )\Bigr) \\ &\qquad +a k^2 R \mathbf{e}_1 e^{i k c t_r} (R - a k \rho \sin(\phi - k c t_r)) \end{aligned}

Writing

\begin{aligned}\phi_r = \phi - k c t_r,\end{aligned} \hspace{\stretch{1}}(1.22)

this can be grouped into similar terms

\begin{aligned}\begin{aligned}\mathbf{R} \times (\mathbf{R}^{*} \times \dot{\mathbf{v}}/c^2)&=a k^2 (a - \rho \cos\phi_r) z \mathbf{e}_3 \\ &+ a k^2 \mathbf{e}_1(a - \rho \cos\phi_r) \rho e^{i\phi} \\ &+ a k^2 \mathbf{e}_1\left(-a (a - \rho \cos\phi_r) (1 - k R i)+ R(R - a k \rho \sin \phi_r)\right) e^{i k c t_r}\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.23)

The electric field pieces can now be collected. Not expanding out the $R^{*}$ from 1.14, this is

\begin{aligned}\begin{aligned}\mathbf{E} &= \frac{q}{(R^{*})^3} z \mathbf{e}_3\Bigl( 1 - a \rho k^2 \cos\phi_r \Bigr) \\ &+\frac{q}{(R^{*})^3} \rho\mathbf{e}_1 \Bigl(1 - a \rho k^2 \cos\phi_r \Bigr) e^{i\phi} \\ &+\frac{q}{(R^{*})^3} a \mathbf{e}_1\left(-\Bigl( 1 + a k^2 (a - \rho \cos\phi_r) \Bigr) (1 - k R i)(1 - a^2 k^2)+ k^2 R(R - a k \rho \sin \phi_r)\right) e^{i k c t_r}\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.24)

Along the z-axis where $\rho = 0$ what do we have?

\begin{aligned}R = \sqrt{z^2 + a^2 } \end{aligned} \hspace{\stretch{1}}(1.25)

\begin{aligned}A^0 = \frac{q}{R} \end{aligned} \hspace{\stretch{1}}(1.26)

\begin{aligned}\mathbf{A} = A^0 a k \mathbf{e}_2 e^{i k c t_r } \end{aligned} \hspace{\stretch{1}}(1.27)

\begin{aligned}c t_r = c t - \sqrt{z^2 + a^2 } \end{aligned} \hspace{\stretch{1}}(1.28)

\begin{aligned}\begin{aligned}\mathbf{E} &= \frac{q}{R^3} z \mathbf{e}_3 \\ &+\frac{q}{R^3} a \mathbf{e}_1\left(-( 1 - a^4 k^4 ) (1 - k R i)+ k^2 R^2 \right) e^{i k c t_r} \end{aligned}\end{aligned} \hspace{\stretch{1}}(1.29)

\begin{aligned}\mathbf{B} = \frac{ z \mathbf{e}_3 - a \mathbf{e}_1 e^{i k c t_r}}{R} \times \mathbf{E}\end{aligned} \hspace{\stretch{1}}(1.30)

The magnetic term here looks like it can be reduced a bit.

## An approximation near the center.

Unlike the old exam I did, where it didn’t specify that the potentials had to be used to calculate the fields, and the problem was reduced to one of algebraic manipulation, our exam explicitly asked for the potentials to be used to calculate the fields.

There was also the restriction to compute them near the center. Setting $\rho = 0$ so that we are looking only near the z-axis, we have

\begin{aligned}A^0 &= \frac{q}{\sqrt{z^2 + a^2}} \\ \mathbf{A} &= \frac{q a k \mathbf{e}_2 e^{i k c t_r} }{\sqrt{z^2 + a^2}} = \frac{q a k (-\sin k c t_r, \cos k c t_r, 0)}{\sqrt{z^2 + a^2}} \\ t_r &= t - R/c = t - \sqrt{z^2 + a^2}/c\end{aligned} \hspace{\stretch{1}}(1.31)

Now we are set to calculate the electric and magnetic fields directly from these. Observe that we have a spatial dependence due to the $t_r$ quantities, and that will have an effect when we operate with the gradient.

In the exam I’d asked Simon (our TA) if this question was asking for the fields at the origin (ie: in the plane of the charge’s motion in the center) or along the z-axis. He said in the plane. That would simplify things, but perhaps too much since $A^0$ becomes constant (in my exam attempt I somehow fudged this to get what I wanted for the $v = 0$ case, but that must have been wrong, and was the result of rushed work).

Let’s now proceed with the field calculation from these potentials

\begin{aligned}\mathbf{E} &= - \boldsymbol{\nabla} A^0 - \frac{1}{{c}} \frac{\partial {\mathbf{A}}}{\partial {t}} \\ \mathbf{B} &= \boldsymbol{\nabla} \times \mathbf{A}.\end{aligned} \hspace{\stretch{1}}(1.34)

For the electric field we need

\begin{aligned}\boldsymbol{\nabla} A^0 &= q \mathbf{e}_3 \partial_z (z^2 + a^2)^{-1/2} \\ &= -q \mathbf{e}_3 \frac{z}{(\sqrt{z^2 + a^2})^3},\end{aligned}

and

\begin{aligned}\frac{1}{{c}} \frac{\partial {\mathbf{A}}}{\partial {t}} =\frac{q a k^2 \mathbf{e}_2 \mathbf{e}_1 \mathbf{e}_2 e^{i k c t_r} }{\sqrt{z^2 + a^2}}.\end{aligned} \hspace{\stretch{1}}(1.36)

Putting these together, our electric field near the z-axis is

\begin{aligned}\mathbf{E} = q \mathbf{e}_3 \frac{z}{(\sqrt{z^2 + a^2})^3}+\frac{q a k^2 \mathbf{e}_1 e^{i k c t_r} }{\sqrt{z^2 + a^2}}.\end{aligned} \hspace{\stretch{1}}(1.37)

(This is another mistake I made on the exam, where I somehow fooled myself into forcing what I knew had to be in the gradient term, despite having an essentially constant scalar potential, having taken $z = 0$.)

What do we get for the magnetic field? In that case we have

\begin{aligned}\boldsymbol{\nabla} \times \mathbf{A}(z)&=\mathbf{e}_\alpha \times \partial_\alpha \mathbf{A} \\ &=\mathbf{e}_3 \times \partial_z \frac{q a k \mathbf{e}_2 e^{i k c t_r} }{\sqrt{z^2 + a^2}} \\ &=\mathbf{e}_3 \times (\mathbf{e}_2 e^{i k c t_r} ) q a k \frac{\partial {}}{\partial {z}} \frac{1}{{\sqrt{z^2 + a^2}}} +q a k \frac{1}{{\sqrt{z^2 + a^2}}} \mathbf{e}_3 \times (\mathbf{e}_2 \partial_z e^{i k c t_r} ) \\ &=-\mathbf{e}_3 \times (\mathbf{e}_2 e^{i k c t_r} ) q a k \frac{z}{(\sqrt{z^2 + a^2})^3} +q a k \frac{1}{{\sqrt{z^2 + a^2}}} \mathbf{e}_3 \times \left( \mathbf{e}_2 \mathbf{e}_1 \mathbf{e}_2 k c e^{i k c t_r} \partial_z ( t - \sqrt{z^2 + a^2}/c ) \right) \\ &=-\mathbf{e}_3 \times (\mathbf{e}_2 e^{i k c t_r} ) q a k \frac{z}{(\sqrt{z^2 + a^2})^3} -q a k^2 \frac{z}{z^2 + a^2} \mathbf{e}_3 \times \left( \mathbf{e}_1 k e^{i k c t_r} \right) \\ &=-\frac{q a k z \mathbf{e}_3}{z^2 + a^2} \times \left( \frac{ \mathbf{e}_2 e^{i k c t_r} }{\sqrt{z^2 + a^2}} + k \mathbf{e}_1 e^{i k c t_r} \right)\end{aligned}

For the direction vectors in the cross products above we have

\begin{aligned}\mathbf{e}_3 \times (\mathbf{e}_2 e^{i \mu})&=\mathbf{e}_3 \times (\mathbf{e}_2 \cos\mu - \mathbf{e}_1 \sin\mu) \\ &=-\mathbf{e}_1 \cos\mu - \mathbf{e}_2 \sin\mu \\ &=-\mathbf{e}_1 e^{i \mu}\end{aligned}

and

\begin{aligned}\mathbf{e}_3 \times (\mathbf{e}_1 e^{i \mu})&=\mathbf{e}_3 \times (\mathbf{e}_1 \cos\mu + \mathbf{e}_2 \sin\mu) \\ &=\mathbf{e}_2 \cos\mu - \mathbf{e}_1 \sin\mu \\ &=\mathbf{e}_2 e^{i \mu}\end{aligned}

Putting everything together, and summarizing results for the fields, we have

\begin{aligned}\mathbf{E} &= q \mathbf{e}_3 \frac{z}{(\sqrt{z^2 + a^2})^3}+\frac{q a k^2 \mathbf{e}_1 e^{i \omega t_r} }{\sqrt{z^2 + a^2}} \\ \mathbf{B} &= \frac{q a k z}{ z^2 + a^2} \left( \frac{\mathbf{e}_1}{\sqrt{z^2 + a^2}} - k \mathbf{e}_2 \right) e^{i \omega t_r}\end{aligned} \hspace{\stretch{1}}(1.38)

The electric field expression above compares well to 1.29. We have the Coulomb term and the radiation term. It is harder to compare the magnetic field to the exact result 1.30 since I did not expand that out.

FIXME: A question to consider. If all this worked should we not also get

\begin{aligned}\mathbf{B} \stackrel{?}{=}\frac{z \mathbf{e}_3 - \mathbf{e}_1 a e^{i \omega t_r}}{\sqrt{z^2 + a^2}} \times \mathbf{E}.\end{aligned} \hspace{\stretch{1}}(1.40)

However, if I do this check I get

\begin{aligned}\mathbf{B} =\frac{q a z}{z^2 + a^2} \left( \frac{1}{{z^2 + a^2}} + k^2 \right) \mathbf{e}_2 e^{i \omega t_r}.\end{aligned} \hspace{\stretch{1}}(1.41)

# Collision of photon and electron.

I made a dumb error on the exam on this one. I set up the four momentum conservation statement, but then didn’t multiply out the cross terms properly. This led me to incorrectly assume that I had to try doing this the hard way (something akin to what I did on the midterm). Simon later told us in the tutorial the simple way, and that’s all we needed here too. Here’s the setup.

An electron at rest initially has four momentum

\begin{aligned}(m c, 0)\end{aligned} \hspace{\stretch{1}}(2.42)

where the incoming photon has four momentum

\begin{aligned}\left(\hbar \frac{\omega}{c}, \hbar \mathbf{k} \right)\end{aligned} \hspace{\stretch{1}}(2.43)

After the collision our electron has some velocity so its four momentum becomes (say)

\begin{aligned}\gamma (m c, m \mathbf{v}),\end{aligned} \hspace{\stretch{1}}(2.44)

and our new photon, going off on an angle $\theta$ relative to $\mathbf{k}$ has four momentum

\begin{aligned}\left(\hbar \frac{\omega'}{c}, \hbar \mathbf{k}' \right)\end{aligned} \hspace{\stretch{1}}(2.45)

Our conservation relationship is thus

\begin{aligned}(m c, 0) + \left(\hbar \frac{\omega}{c}, \hbar \mathbf{k} \right)=\gamma (m c, m \mathbf{v})+\left(\hbar \frac{\omega'}{c}, \hbar \mathbf{k}' \right)\end{aligned} \hspace{\stretch{1}}(2.46)

I squared both sides, but dropped my cross terms, which was just plain wrong, and costly for both time and effort on the exam. What I should have done was just

\begin{aligned}\gamma (m c, m \mathbf{v}) =(m c, 0) + \left(\hbar \frac{\omega}{c}, \hbar \mathbf{k} \right)-\left(\hbar \frac{\omega'}{c}, \hbar \mathbf{k}' \right),\end{aligned} \hspace{\stretch{1}}(2.47)

and then square this (really making contractions of the form $p_i p^i$). That gives (and this time keeping my cross terms)

\begin{aligned}(\gamma (m c, m \mathbf{v}) )^2 &= \gamma^2 m^2 (c^2 - \mathbf{v}^2) \\ &= m^2 c^2 \\ &=m^2 c^2 + 0 + 0+ 2 (m c, 0) \cdot \left(\hbar \frac{\omega}{c}, \hbar \mathbf{k} \right)- 2 (m c, 0) \cdot \left(\hbar \frac{\omega'}{c}, \hbar \mathbf{k}' \right)- 2 \cdot \left(\hbar \frac{\omega}{c}, \hbar \mathbf{k} \right)\cdot \left(\hbar \frac{\omega'}{c}, \hbar \mathbf{k}' \right) \\ &=m^2 c^2 + 2 m c \hbar \frac{\omega}{c} - 2 m c \hbar \frac{\omega'}{c}- 2\hbar^2 \left(\frac{\omega}{c} \frac{\omega'}{c}- \mathbf{k} \cdot \mathbf{k}'\right) \\ &=m^2 c^2 + 2 m c \hbar \frac{\omega}{c} - 2 m c \hbar \frac{\omega'}{c}- 2\hbar^2 \frac{\omega}{c} \frac{\omega'}{c} (1 - \cos\theta)\end{aligned}

Rearranging a bit we have

\begin{aligned}\omega' \left( m + \frac{\hbar \omega}{c^2} ( 1 - \cos\theta ) \right) = m \omega,\end{aligned} \hspace{\stretch{1}}(2.48)

or

\begin{aligned}\omega' = \frac{\omega}{1 + \frac{\hbar \omega}{m c^2} ( 1 - \cos\theta ) }\end{aligned} \hspace{\stretch{1}}(2.49)
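As a quick sanity check of 2.49, here is a tiny numeric sketch (the incident photon energy and the scattering angle are arbitrary illustrative values):

```python
import math

# Scattered frequency ratio omega'/omega from (2.49), for a photon with
# hbar*omega equal to the electron rest energy, scattered at 90 degrees.
hbar_omega_over_mc2 = 1.0
theta = math.pi / 2
ratio = 1.0 / (1.0 + hbar_omega_over_mc2 * (1.0 - math.cos(theta)))
print(ratio)   # 0.5: the scattered photon has half the incident frequency
```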

# Pion decay.

The problem above is very much like a midterm problem we had, so there was no justifiable excuse for messing up on it. That midterm problem was to consider the split of a pion at rest into a neutrino (massless) and a muon, and to calculate the energy of the muon. That one also follows the same pattern, a calculation of four momentum conservation, say

\begin{aligned}(m_\pi c, 0) = \hbar \frac{\omega}{c}(1, \hat{\mathbf{k}}) + ( \mathcal{E}_\mu/c, \mathbf{p}_\mu ).\end{aligned} \hspace{\stretch{1}}(3.50)

Here $\omega$ is the frequency of the massless neutrino. The massless nature is encoded by a four momentum that squares to zero, which follows from $(1, \hat{\mathbf{k}}) \cdot (1, \hat{\mathbf{k}}) = 1^2 - \hat{\mathbf{k}} \cdot \hat{\mathbf{k}} = 0$.

When I did this problem on the midterm, I perversely put in a scattering angle, instead of recognizing that the particles must scatter at 180 degree directions since spatial momentum components must also be preserved. This and the combination of trying to work in spatial quantities led to a mess and I didn’t get the end result in anything that could be considered tidy.

The simple way to do this is to just rearrange to put the null vector on one side, and then square. This gives us

\begin{aligned}0 &=\left(\hbar \frac{\omega}{c}(1, \hat{\mathbf{k}}) \right) \cdot\left(\hbar \frac{\omega}{c}(1, \hat{\mathbf{k}}) \right) \\ &=\left( (m_\pi c, 0) - ( \mathcal{E}_\mu/c, \mathbf{p}_\mu ) \right) \cdot \left( (m_\pi c, 0) - ( \mathcal{E}_\mu/c, \mathbf{p}_\mu ) \right) \\ &={m_\pi}^2 c^2 + {m_\mu}^2 c^2 - 2 (m_\pi c, 0) \cdot ( \mathcal{E}_\mu/c, \mathbf{p}_\mu ) \\ &={m_\pi}^2 c^2 + {m_\mu}^2 c^2 - 2 m_\pi \mathcal{E}_\mu\end{aligned}

A final re-arrangement gives us the muon energy

\begin{aligned}\mathcal{E}_\mu = \frac{1}{{2}} \frac{ {m_\pi}^2 + {m_\mu}^2 }{m_\pi} c^2\end{aligned} \hspace{\stretch{1}}(3.51)
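Plugging in rough masses gives a quick feel for 3.51 (a throwaway sketch; the mass values below are approximate):

```python
# Muon energy from a pion decay at rest, using approximate masses in MeV/c^2.
m_pi = 139.6
m_mu = 105.7
E_mu = 0.5 * (m_pi**2 + m_mu**2) / m_pi   # in MeV
print(E_mu)   # ~109.8 MeV
```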

## Vector form of Julia fractal

Posted by peeterjoot on December 27, 2010

# Motivation.

As outlined in [1], 2-D and N-D Julia fractals can be computed using the geometric product, instead of complex numbers. I explore a couple of details related to that here.

# Guts

Fractal patterns like the Mandelbrot and Julia sets are typically generated using iterative computations in the complex plane. For the Julia set, our iteration has the form

\begin{aligned}Z \rightarrow Z^p + C\end{aligned} \hspace{\stretch{1}}(2.1)

where $p$ is an integer constant, and $Z$ and $C$ are complex numbers. For $p=2$, with $C$ varied as the parameter and $Z$ started at zero, this same iteration generates the Mandelbrot set. Given the isomorphism between complex numbers and vectors using the geometric product, we can write

\begin{aligned}Z &= \mathbf{x} \hat{\mathbf{n}} \\ C &= \mathbf{c} \hat{\mathbf{n}},\end{aligned} \hspace{\stretch{1}}(2.2)

and reexpress the Julia iterator as

\begin{aligned}\mathbf{x} \rightarrow (\mathbf{x} \hat{\mathbf{n}})^p \hat{\mathbf{n}} + \mathbf{c}\end{aligned} \hspace{\stretch{1}}(2.4)

It’s not obvious that the RHS of this equation is a vector and not a multivector, especially when the vector $\mathbf{x}$ lies in $\mathbb{R}^{3}$ or higher dimensional space. To get a feel for this, let’s start by writing this out in components for $\hat{\mathbf{n}} = \mathbf{e}_1$ and $p=2$. We obtain for the product term

\begin{aligned}(\mathbf{x} \hat{\mathbf{n}})^p \hat{\mathbf{n}} &= \mathbf{x} \hat{\mathbf{n}} \mathbf{x} \hat{\mathbf{n}} \hat{\mathbf{n}} \\ &= \mathbf{x} \hat{\mathbf{n}} \mathbf{x} \\ &= (x_1 \mathbf{e}_1 + x_2 \mathbf{e}_2 )\mathbf{e}_1(x_1 \mathbf{e}_1 + x_2 \mathbf{e}_2 ) \\ &= (x_1 + x_2 \mathbf{e}_2 \mathbf{e}_1 )(x_1 \mathbf{e}_1 + x_2 \mathbf{e}_2 ) \\ &= (x_1^2 - x_2^2 ) \mathbf{e}_1 + 2 x_1 x_2 \mathbf{e}_2\end{aligned}

Looking at the same square in coordinate representation for the $\mathbb{R}^{n}$ case (using summation notation unless otherwise specified), we have

\begin{aligned}\mathbf{x} \hat{\mathbf{n}} \mathbf{x} &= x_k \mathbf{e}_k \mathbf{e}_1x_m \mathbf{e}_m \\ &= \left(x_1 + \sum_{k>1} x_k \mathbf{e}_k \mathbf{e}_1\right)x_m \mathbf{e}_m \\ &= x_1 x_m \mathbf{e}_m +\sum_{k>1} x_k x_m \mathbf{e}_k \mathbf{e}_1 \mathbf{e}_m \\ &= x_1 x_m \mathbf{e}_m +\sum_{k>1} x_k x_1 \mathbf{e}_k +\sum_{k>1,m>1} x_k x_m \mathbf{e}_k \mathbf{e}_1 \mathbf{e}_m \\ &= \left(x_1^2 -\sum_{k>1} x_k^2\right) \mathbf{e}_1+2 \sum_{k>1} x_1 x_k \mathbf{e}_k +\sum_{k, m>1, k \ne m} x_k x_m \mathbf{e}_k \mathbf{e}_1 \mathbf{e}_m\end{aligned}

This last term is zero since $\mathbf{e}_k \mathbf{e}_1 \mathbf{e}_m = -\mathbf{e}_m \mathbf{e}_1 \mathbf{e}_k$, and we are left with

\begin{aligned}\mathbf{x} \hat{\mathbf{n}} \mathbf{x} =\left(x_1^2 -\sum_{k>1} x_k^2\right) \mathbf{e}_1+2 \sum_{k>1} x_1 x_k \mathbf{e}_k,\end{aligned} \hspace{\stretch{1}}(2.5)

a vector, even for non-planar vectors. How about for an arbitrary orientation of the unit vector in $\mathbb{R}^{n}$? For that we get

\begin{aligned}\mathbf{x} \hat{\mathbf{n}} \mathbf{x} &=(\mathbf{x} \cdot \hat{\mathbf{n}} \hat{\mathbf{n}} + \mathbf{x} \wedge \hat{\mathbf{n}} \hat{\mathbf{n}} ) \hat{\mathbf{n}} \mathbf{x} \\ &=(\mathbf{x} \cdot \hat{\mathbf{n}} + \mathbf{x} \wedge \hat{\mathbf{n}} ) (\mathbf{x} \cdot \hat{\mathbf{n}} \hat{\mathbf{n}} + \mathbf{x} \wedge \hat{\mathbf{n}} \hat{\mathbf{n}} ) \\ &=((\mathbf{x} \cdot \hat{\mathbf{n}})^2 + (\mathbf{x} \wedge \hat{\mathbf{n}})^2) \hat{\mathbf{n}}+ 2 (\mathbf{x} \cdot \hat{\mathbf{n}}) (\mathbf{x} \wedge \hat{\mathbf{n}}) \hat{\mathbf{n}}\end{aligned}

We can read 2.5 off of this result by inspection for the $\hat{\mathbf{n}} = \mathbf{e}_1$ case.
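A quick numeric spot check of 2.5 (a small throwaway sketch of my own, using a bitmask representation of $\mathbb{R}^{3}$ basis blades, not anything from [1]):

```python
# A tiny Euclidean R^3 geometric product: a multivector is a dict mapping a basis
# blade bitmask (bit k set means the blade contains e_{k+1}) to its coefficient.
# This representation and these helper names are my own, used only for a spot check.

def reorder_sign(a, b):
    """Sign incurred reordering the product of basis blades a and b into canonical order."""
    a >>= 1
    swaps = 0
    while a:
        swaps += bin(a & b).count("1")
        a >>= 1
    return -1 if swaps % 2 else 1

def gp(x, y):
    """Geometric product of two multivectors (Euclidean metric, so e_k^2 = +1)."""
    out = {}
    for ba, ca in x.items():
        for bb, cb in y.items():
            blade = ba ^ bb
            out[blade] = out.get(blade, 0.0) + reorder_sign(ba, bb) * ca * cb
    return out

def vec(x1, x2, x3):
    return {0b001: x1, 0b010: x2, 0b100: x3}

x = vec(1.0, 2.0, 3.0)
e1 = vec(1.0, 0.0, 0.0)
print(gp(gp(x, e1), x))
# The e1, e2, e3 coefficients (keys 1, 2, 4) come out as -12, 4, 6, i.e.
# (x1^2 - x2^2 - x3^2) e1 + 2 x1 x2 e2 + 2 x1 x3 e3, with every other grade zero,
# matching (2.5).
```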

It is now straightforward to show that the product $(\mathbf{x} \hat{\mathbf{n}})^p \hat{\mathbf{n}}$ is a vector for integer $p \ge 2$. We’ve covered the $p=2$ case, which motivates the induction hypothesis that this product has the following form

\begin{aligned}(\mathbf{x} \hat{\mathbf{n}})^{p-1} \hat{\mathbf{n}} = a \hat{\mathbf{n}} + b (\mathbf{x} \wedge \hat{\mathbf{n}}) \hat{\mathbf{n}},\end{aligned} \hspace{\stretch{1}}(2.6)

for scalars $a$ and $b$. The induction test becomes

\begin{aligned}(\mathbf{x} \hat{\mathbf{n}})^{p} \hat{\mathbf{n}} &= (\mathbf{x} \hat{\mathbf{n}})^{p-1} (\mathbf{x} \hat{\mathbf{n}}) \hat{\mathbf{n}} \\ &= (\mathbf{x} \hat{\mathbf{n}})^{p-1} \mathbf{x} \\ &= (a + b (\mathbf{x} \wedge \hat{\mathbf{n}}) ) ((\mathbf{x} \cdot \hat{\mathbf{n}} )\hat{\mathbf{n}} + (\mathbf{x} \wedge \hat{\mathbf{n}}) \hat{\mathbf{n}}) \\ &= ( a(\mathbf{x} \cdot \hat{\mathbf{n}} ) + b (\mathbf{x} \wedge \hat{\mathbf{n}})^2 ) \hat{\mathbf{n}}+ ( a + b(\mathbf{x} \cdot \hat{\mathbf{n}} ) ) (\mathbf{x} \wedge \hat{\mathbf{n}}) \hat{\mathbf{n}}.\end{aligned}

Again we have a vector split nicely into projective and rejective components, so for any integer power of $p$ our iterator 2.4 employing the geometric product is a mapping from vectors to vectors.

There is a striking image in the text of such a Julia set for a 3D iterator, and an exercise left for the adventurous reader to attempt to code that, based on the 2D $p=2$ sample code they provide.
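For the 2D $p=2$ case the vector iterator is just $(x_1, x_2) \rightarrow (x_1^2 - x_2^2 + c_1, 2 x_1 x_2 + c_2)$, and a minimal escape-time sketch (the constant, resolution, and escape radius below are arbitrary choices of mine, not the book’s sample code) looks like

```python
# ASCII escape-time rendering of the p = 2 Julia iterator x -> (x n)^2 n + c = x n x + c
# with n = e1, i.e. (x1, x2) -> (x1^2 - x2^2 + c1, 2 x1 x2 + c2).  Constants are illustrative.
c1, c2 = -0.8, 0.156
width, height, max_iter = 60, 30, 40

rows = []
for j in range(height):
    row = ""
    for i in range(width):
        x1 = -1.6 + 3.2 * i / (width - 1)
        x2 = -1.2 + 2.4 * j / (height - 1)
        n = 0
        while n < max_iter and x1 * x1 + x2 * x2 < 4.0:
            x1, x2 = x1 * x1 - x2 * x2 + c1, 2.0 * x1 * x2 + c2
            n += 1
        row += "#" if n == max_iter else " "
    rows.append(row)
print("\n".join(rows))
```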

# References

[1] L. Dorst, D. Fontijne, and S. Mann. Geometric Algebra for Computer Science. Morgan Kaufmann, San Francisco, 2007.