# Peeter Joot's (OLD) Blog.


## A cylindrical Lienard-Wiechert potential calculation using multivector matrix products.

Posted by peeterjoot on May 1, 2011

[Click here for a PDF of this post with nicer formatting (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

# Motivation.

A while ago I worked the problem of determining the equations of motion for a chain-like object [1].
This was idealized as a set of $N$ interconnected spherical pendulums. One aspect of that problem I found fun was that it let me use a new construct: factoring vectors into multivector matrix products, multiplied using the Geometric (Clifford) product. At the time this seemed to make the problem tractable, whereas a traditional formulation was much less so. Later I realized that a very similar factorization was possible with matrices directly [2]. This was a bit disappointing, since I was enamored of my new calculation tool, and realized that the problem could have been tackled with much less learning cost if the same factorization technique had been applied using plain old matrices.

I’ve now encountered a new use for this idea of factoring a vector into a product of multivector matrices. Namely, a calculation of the four vector Lienard-Wiechert potentials, given a general motion described in cylindrical coordinates. This I thought I’d try since we had a similar problem on our exam (with the motion of the charged particle additionally constrained to a circle).

# The goal of the calculation.

Our problem is to calculate

\begin{aligned}A^0 &= \frac{q}{R^{*}} \\ \mathbf{A} &= \frac{q \mathbf{v}_c}{c R^{*}}\end{aligned} \hspace{\stretch{1}}(2.1)

where $\mathbf{x}_c(t)$ is the location of the charged particle, $\mathbf{r}$ is the point at which the field is measured, and

\begin{aligned}R^{*} &= R - \frac{\mathbf{v}_c}{c} \cdot \mathbf{R} \\ R^2 &= \mathbf{R}^2 = c^2( t - t_r)^2 \\ \mathbf{R} &= \mathbf{r} - \mathbf{x}_c(t_r) \\ \mathbf{v}_c &= \frac{\partial {\mathbf{x}_c}}{\partial {t_r}}.\end{aligned} \hspace{\stretch{1}}(2.3)

# Calculating the potentials for an arbitrary cylindrical motion.

Suppose that our charged particle has the trajectory

\begin{aligned}\mathbf{x}_c(t) = h(t) \mathbf{e}_3 + a(t) \mathbf{e}_1 e^{i \theta(t)}\end{aligned} \hspace{\stretch{1}}(3.7)

where $i = \mathbf{e}_1 \mathbf{e}_2$, and we measure the field at the point

\begin{aligned}\mathbf{r} = z \mathbf{e}_3 + \rho \mathbf{e}_1 e^{i \phi}\end{aligned} \hspace{\stretch{1}}(3.8)

The vector separation between the two is

\begin{aligned}\mathbf{R} &= \mathbf{r} - \mathbf{x}_c \\ &= (z - h) \mathbf{e}_3 + \mathbf{e}_1 ( \rho e^{i\phi} - a e^{i\theta} ) \\ &=\begin{bmatrix}\mathbf{e}_1 e^{i\phi} & - \mathbf{e}_1 e^{i\theta} & \mathbf{e}_3\end{bmatrix}\begin{bmatrix}\rho \\ a \\ z - h\end{bmatrix}\end{aligned}

Transposition does not change this at all, so the (squared) length of this vector difference is

\begin{aligned}\mathbf{R}^2 &=\begin{bmatrix}\rho &a & (z - h)\end{bmatrix}\begin{bmatrix}\mathbf{e}_1 e^{i\phi} \\ - \mathbf{e}_1 e^{i\theta} \\ \mathbf{e}_3\end{bmatrix}\begin{bmatrix}\mathbf{e}_1 e^{i\phi} & - \mathbf{e}_1 e^{i\theta} & \mathbf{e}_3\end{bmatrix}\begin{bmatrix}\rho \\ a \\ z - h\end{bmatrix} \\ &=\begin{bmatrix}\rho &a & (z - h)\end{bmatrix}\begin{bmatrix}\mathbf{e}_1 e^{i\phi} \mathbf{e}_1 e^{i\phi} & - \mathbf{e}_1 e^{i\phi} \mathbf{e}_1 e^{i\theta} & \mathbf{e}_1 e^{i\phi} \mathbf{e}_3 \\ - \mathbf{e}_1 e^{i\theta} \mathbf{e}_1 e^{i\phi} & \mathbf{e}_1 e^{i\theta} \mathbf{e}_1 e^{i\theta} & - \mathbf{e}_1 e^{i\theta} \mathbf{e}_3 \\ \mathbf{e}_3 \mathbf{e}_1 e^{i\phi} & -\mathbf{e}_3 \mathbf{e}_1 e^{i\theta} & \mathbf{e}_3 \mathbf{e}_3 \\ \end{bmatrix}\begin{bmatrix}\rho \\ a \\ z - h\end{bmatrix} \\ &=\begin{bmatrix}\rho &a & (z - h)\end{bmatrix}\begin{bmatrix}1 & - e^{i(\theta-\phi)} & \mathbf{e}_1 e^{i\phi} \mathbf{e}_3 \\ - e^{i(\phi -\theta)} & 1 & - \mathbf{e}_1 e^{i\theta} \mathbf{e}_3 \\ \mathbf{e}_3 \mathbf{e}_1 e^{i\phi} & -\mathbf{e}_3 \mathbf{e}_1 e^{i\theta} & 1 \\ \end{bmatrix}\begin{bmatrix}\rho \\ a \\ z - h\end{bmatrix} \\ \end{aligned}
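None of this requires symbolic manipulation; the multivector matrix product can be spot-checked numerically with a tiny geometric algebra implementation. The sketch below of Euclidean $\mathrm{Cl}(3)$ (the bitmask blade representation and the helper names `gp`, `expi`, and so on are my own, not anything from this post) builds $\mathbf{R}$ from the factored form and squares it with the geometric product:

```python
import math

# Minimal Euclidean Cl(3): a multivector is a dict {bitmask: coefficient},
# with bit 0 -> e1, bit 1 -> e2, bit 2 -> e3.

def reorder_sign(a, b):
    """Sign from counting swaps needed to put the blade product a b in canonical order."""
    a >>= 1
    total = 0
    while a:
        total += bin(a & b).count("1")
        a >>= 1
    return -1 if total % 2 else 1

def gp(x, y):
    """Geometric product of two multivectors (Euclidean metric, e_k^2 = 1)."""
    out = {}
    for a, ca in x.items():
        for b, cb in y.items():
            k = a ^ b
            out[k] = out.get(k, 0.0) + reorder_sign(a, b) * ca * cb
    return out

def add(x, y):
    out = dict(x)
    for k, c in y.items():
        out[k] = out.get(k, 0.0) + c
    return out

def scale(x, s):
    return {k: s * c for k, c in x.items()}

e1, e3 = {0b001: 1.0}, {0b100: 1.0}

def expi(t):
    # e^{i t} with i = e1 e2 (bitmask 0b011)
    return {0b000: math.cos(t), 0b011: math.sin(t)}

# arbitrary sample values for the field point and particle position
rho, a, z, h, phi, theta = 1.3, 0.7, 2.0, 0.4, 0.5, 1.1

# row of multivectors times column of scalars:
# R = [e1 e^{i phi}, -e1 e^{i theta}, e3] [rho, a, z - h]^T
row = [gp(e1, expi(phi)), scale(gp(e1, expi(theta)), -1.0), e3]
col = [rho, a, z - h]
R = {}
for m, s in zip(row, col):
    R = add(R, scale(m, s))

R2 = gp(R, R)   # should be a pure scalar
closed_form = rho**2 + a**2 + (z - h)**2 - 2 * a * rho * math.cos(theta - phi)
print(R2.get(0, 0.0), closed_form)
```

Running this shows `gp(R, R)` has only a grade-0 component, matching the closed form for $\mathbf{R}^2$ found below.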

## A motivation for a Hermitian like transposition operation.

There are a few things of note about this matrix. One is that it is not symmetric. This is a consequence of the non-commutative nature of the vector products. What we do have is a Hermitian-transpose-like symmetry. Observe that pairs such as the $(1,2)$ and $(2,1)$ elements of the matrix are equal once all their vector products are reversed.

Using tilde to denote this reversion, we have

\begin{aligned}(e^{i (\theta - \phi)})^{\tilde{}}&=\cos(\theta - \phi)+ (\mathbf{e}_1 \mathbf{e}_2)^{\tilde{}}\sin(\theta - \phi) \\ &=\cos(\theta - \phi)+ \mathbf{e}_2 \mathbf{e}_1\sin(\theta - \phi) \\ &=\cos(\theta - \phi)- \mathbf{e}_1 \mathbf{e}_2 \sin(\theta - \phi) \\ &=e^{-i (\theta -\phi)}.\end{aligned}

The fact that all the elements of this matrix, if non-scalar, have their reversed value in the transposed position, is sufficient to show that the end result is a scalar as expected. Consider a general quadratic form where the matrix has scalar and bivector grades as above, where there is reversion in all the transposed positions. That is

\begin{aligned}b^\text{T} A b\end{aligned} \hspace{\stretch{1}}(3.9)

where $A = {\left\lVert{A_{ij}}\right\rVert}$ is an $m \times m$ matrix with $A_{ij} = \tilde{A}_{ji}$, containing scalar and bivector grades, and $b = {\left\lVert{b_i}\right\rVert}$ is an $m \times 1$ column matrix of scalars. Then the product is

\begin{aligned}\sum_{ij} b_i A_{ij} b_j&=\sum_{i<j} b_i \left\{ A_{ij} + A_{ji} \right\} b_j + \sum_i b_i A_{ii} b_i \\ &=\sum_{i<j} b_i \left\{ A_{ij} + \tilde{A}_{ij} \right\} b_j + \sum_i A_{ii} b_i^2.\end{aligned}

The quantity in braces $A_{ij} + \tilde{A_{ij}}$ is a scalar since any of the bivector grades in $A_{ij}$ cancel out. Consider a similar general product of a vector after the vector has been factored into a product of matrices of multivector elements

\begin{aligned}\mathbf{x} = \begin{bmatrix}a_1 & a_2 & \cdots & a_m\end{bmatrix}\begin{bmatrix}b_1 \\ b_2 \\ \vdots \\ b_m\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.10)

The (squared) length of the vector is

\begin{aligned}\mathbf{x}^2 &= (a_i b_i) (a_j b_j) \\ &= (a_i b_i)^{\tilde{}} a_j b_j \\ &= \tilde{b_i} \tilde{a_i} a_j b_j \\ &= \tilde{b_i} (\tilde{a_i} a_j) b_j.\end{aligned}

It is clear that we want a transposition operation that includes reversal of its elements, so that with a general factorization of a vector into matrices of multivectors $\mathbf{x} = A b$, its square will be $\mathbf{x}^2 = {\tilde{b}}^\text{T} {\tilde{A}}^\text{T} A b$.

As with purely complex valued matrices, it is convenient to use the dagger notation, and define

\begin{aligned}A^\dagger = \tilde{A}^\text{T}\end{aligned} \hspace{\stretch{1}}(3.11)

where $\tilde{A}$ contains the reversed elements of $A$. By extension, we can define dot and wedge products of vectors expressed as products of multivector matrices. Given $\mathbf{x} = A b$, a row vector and column vector product, and $\mathbf{y} = C d$, where each of the rows or columns has $m$ elements, the dot and wedge products are

\begin{aligned}\mathbf{x} \cdot \mathbf{y} &= \left\langle{{ d^\dagger C^\dagger A b }}\right\rangle \\ \mathbf{x} \wedge \mathbf{y} &= {\left\langle{{ d^\dagger C^\dagger A b }}\right\rangle}_{2}.\end{aligned} \hspace{\stretch{1}}(3.12)

In particular, if $b$ and $d$ are matrices of scalars we have

\begin{aligned}\mathbf{x} \cdot \mathbf{y} &= d^\text{T} \left\langle{{C^\dagger A}}\right\rangle b = d^\text{T} \frac{C^\dagger A + A^\dagger C}{2} b \\ \mathbf{x} \wedge \mathbf{y} &= d^\text{T} {\left\langle{{C^\dagger A}}\right\rangle}_{2} b = d^\text{T} \frac{C^\dagger A - A^\dagger C}{2} b.\end{aligned} \hspace{\stretch{1}}(3.14)

The dot product is seen as a generator of symmetric matrices, and the wedge product a generator of purely antisymmetric matrices.
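A quick numerical spot check is possible because, when all the bivector content lies in a single plane, a multivector $\alpha + \beta \mathbf{e}_1 \mathbf{e}_2$ multiplies exactly like the complex number $\alpha + \beta i$: reversion acts as conjugation and the dagger becomes the ordinary conjugate transpose. Writing plane vectors as $\mathbf{x} = \mathbf{e}_1 z$, the product $\tilde{\mathbf{y}} \mathbf{x}$ becomes $\bar{w} z$, and the grade selections above become real and imaginary parts. A sketch of that analogy (sample data; the variable names are mine):

```python
import numpy as np

rng = np.random.default_rng(1)

# Rows of "plane multivectors" modeled as complex numbers, and real scalar
# columns, so that x = A b and y = C d as in the factored products above.
A = rng.normal(size=3) + 1j * rng.normal(size=3)
C = rng.normal(size=3) + 1j * rng.normal(size=3)
b = rng.normal(size=3)
d = rng.normal(size=3)

x = A @ b   # complex stand-in for the vector e1 x
y = C @ d

CdaggerA = np.outer(np.conj(C), A)   # C^dagger A, dagger = conjugate transpose

# grade selection of d^dagger C^dagger A b: scalar grade -> real part,
# e1 e2 (bivector) coefficient -> imaginary part
dot   = d @ CdaggerA.real @ b
wedge = d @ CdaggerA.imag @ b

# a dagger-symmetric matrix models as a Hermitian one, so the quadratic
# form b^T M b is purely scalar (real), as argued for the R^2 computation
M = CdaggerA + np.conj(CdaggerA).T
q = b @ M @ b
print(dot, np.real(np.conj(y) * x))
```

Here `dot` reproduces $\mathrm{Re}(\bar{w} z) = \mathbf{x} \cdot \mathbf{y}$, and `wedge` is the $\mathbf{e}_1 \mathbf{e}_2$ coefficient of the grade-2 part, up to the orientation convention for the wedge.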

## Back to the problem

Now return to the example above, where we want $\mathbf{R}^2$. We have seen that we can drop any bivector terms from the matrix, so the squared length reduces as

\begin{aligned}\mathbf{R}^2 &=\begin{bmatrix}\rho &a & (z - h)\end{bmatrix}\begin{bmatrix}1 & - e^{i(\theta-\phi)} & 0 \\ - e^{i(\phi -\theta)} & 1 & 0 \\ 0 & 0 & 1 \\ \end{bmatrix}\begin{bmatrix}\rho \\ a \\ z - h\end{bmatrix} \\ &=\begin{bmatrix}\rho &a & (z - h)\end{bmatrix}\begin{bmatrix}1 & - \cos(\theta-\phi) & 0 \\ - \cos(\theta -\phi) & 1 & 0 \\ 0 & 0 & 1 \\ \end{bmatrix}\begin{bmatrix}\rho \\ a \\ z - h\end{bmatrix} \\ &=\begin{bmatrix}\rho &a & (z - h)\end{bmatrix}\begin{bmatrix}\rho - a \cos(\theta - \phi) \\ - \rho \cos(\theta - \phi) + a \\ z - h\end{bmatrix}\end{aligned}

So we have

\begin{aligned}\mathbf{R}^2 = \rho^2 + a^2 + (z -h)^2 - 2 a \rho \cos(\theta - \phi)\end{aligned} \hspace{\stretch{1}}(3.16)
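As a sanity check (with arbitrary sample values), this agrees with the Cartesian distance computed directly:

```python
import math

# arbitrary sample values for the field point and particle position
rho, a, z, h, phi, theta = 1.3, 0.7, 2.0, 0.4, 0.5, 1.1

r  = (rho * math.cos(phi), rho * math.sin(phi), z)   # field point
xc = (a * math.cos(theta), a * math.sin(theta), h)   # particle position

R2_direct  = sum((ri - xi) ** 2 for ri, xi in zip(r, xc))
R2_formula = rho**2 + a**2 + (z - h)**2 - 2 * a * rho * math.cos(theta - phi)
print(R2_direct, R2_formula)
```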

Now consider the velocity of the charged particle. We can write this as

\begin{aligned}\frac{d \mathbf{x}_c}{dt} = \begin{bmatrix}\mathbf{e}_3 & \mathbf{e}_1 e^{i \theta} & \mathbf{e}_2 e^{i\theta}\end{bmatrix}\begin{bmatrix}\dot{h} \\ \dot{a} \\ a \dot{\theta}\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.17)
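The components of 3.17 can be verified against a central finite difference of the trajectory. The sample functions $h(t)$, $a(t)$, $\theta(t)$ below are hypothetical, chosen just for the check:

```python
import math

# hypothetical sample trajectory functions h(t), a(t), theta(t)
def xc(t):
    h, a, th = 0.2 * t, 1.0 + 0.1 * t, 2.0 * t
    return (a * math.cos(th), a * math.sin(th), h)

t, eps = 0.3, 1e-6
# central finite difference of the trajectory
fd = tuple((p - m) / (2 * eps) for p, m in zip(xc(t + eps), xc(t - eps)))

# components of (3.17): hdot e3 + (adot e1 + a thetadot e2) e^{i theta}
h, a, th = 0.2 * t, 1.0 + 0.1 * t, 2.0 * t
hd, ad, thd = 0.2, 0.1, 2.0
v = (ad * math.cos(th) - a * thd * math.sin(th),
     ad * math.sin(th) + a * thd * math.cos(th),
     hd)
print(max(abs(f - vi) for f, vi in zip(fd, v)))  # small; finite-difference error
```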

To compute $\mathbf{v}_c \cdot \mathbf{R}$ we have to extract scalar grades of the matrix product

\begin{aligned}\left\langle{{\begin{bmatrix}\mathbf{e}_1 e^{i\phi} \\ - \mathbf{e}_1 e^{i\theta} \\ \mathbf{e}_3\end{bmatrix}\begin{bmatrix}\mathbf{e}_3 & \mathbf{e}_1 e^{i \theta} & \mathbf{e}_2 e^{i\theta}\end{bmatrix}}}\right\rangle&=\left\langle{{\begin{bmatrix}\mathbf{e}_1 e^{i\phi} \mathbf{e}_3 & \mathbf{e}_1 e^{i\phi} \mathbf{e}_1 e^{i \theta} & \mathbf{e}_1 e^{i\phi} \mathbf{e}_2 e^{i\theta} \\ - \mathbf{e}_1 e^{i\theta} \mathbf{e}_3 & - \mathbf{e}_1 e^{i\theta} \mathbf{e}_1 e^{i \theta} & - \mathbf{e}_1 e^{i\theta} \mathbf{e}_2 e^{i\theta} \\ \mathbf{e}_3 \mathbf{e}_3 & \mathbf{e}_3 \mathbf{e}_1 e^{i \theta} & \mathbf{e}_3 \mathbf{e}_2 e^{i\theta} \\ \end{bmatrix}}}\right\rangle \\ &= \begin{bmatrix}0 & \cos(\theta-\phi) & - \sin(\theta - \phi) \\ 0 & - 1 & 0 \\ 1 & 0 & 0 \\ \end{bmatrix}.\end{aligned}

So the dot product is

\begin{aligned}\mathbf{R} \cdot \mathbf{v} &=\begin{bmatrix}\rho &a & (z - h)\end{bmatrix}\begin{bmatrix}0 & \cos(\theta-\phi) & - \sin(\theta - \phi) \\ 0 & - 1 & 0 \\ 1 & 0 & 0 \\ \end{bmatrix}\begin{bmatrix}\dot{h} \\ \dot{a} \\ a \dot{\theta}\end{bmatrix} \\ &=\begin{bmatrix}\rho &a & (z - h)\end{bmatrix}\begin{bmatrix}\dot{a} \cos(\theta - \phi) - a \dot{\theta} \sin(\theta - \phi) \\ - \dot{a} \\ \dot{h}\end{bmatrix} \\ &=(z - h) \dot{h} - \dot{a} a + \rho \dot{a} \cos(\theta - \phi) - \rho a \dot{\theta} \sin(\theta - \phi) \end{aligned}
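With arbitrary sample values (the derivative values $\dot{h}$, $\dot{a}$, $\dot{\theta}$ below are hypothetical), this scalar agrees with the component-wise dot product:

```python
import math

# arbitrary sample values: field point, particle coordinates, and their rates
rho, z, phi = 1.3, 2.0, 0.5
h, a, theta = 0.06, 1.03, 0.6
hd, ad, thd = 0.2, 0.1, 2.0

v = (ad * math.cos(theta) - a * thd * math.sin(theta),   # Cartesian velocity
     ad * math.sin(theta) + a * thd * math.cos(theta),
     hd)
R = (rho * math.cos(phi) - a * math.cos(theta),          # Cartesian separation
     rho * math.sin(phi) - a * math.sin(theta),
     z - h)

direct  = sum(vi * Ri for vi, Ri in zip(v, R))
formula = ((z - h) * hd - ad * a
           + rho * ad * math.cos(theta - phi)
           - rho * a * thd * math.sin(theta - phi))
print(direct, formula)
```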

This is the last of what we needed for the potentials, so we have

\begin{aligned}A^0 &= \frac{q}{\sqrt{\rho^2 + a^2 + (z -h)^2 - 2 a \rho \cos(\theta - \phi)} -(z - h) \dot{h}/c + a \dot{a}/c - \rho \cos(\theta - \phi) \dot{a}/c + \rho a \sin(\theta - \phi) \dot{\theta}/c} \\ \mathbf{A} &= \frac{ \dot{h} \mathbf{e}_3 + (\dot{a} \mathbf{e}_1 + a \dot{\theta} \mathbf{e}_2) e^{i\theta} }{c} A^0,\end{aligned} \hspace{\stretch{1}}(3.18)

where all the time dependent terms in the potentials are evaluated at the retarded time $t_r$, defined implicitly by the messy relationship

\begin{aligned}c(t - t_r) = \sqrt{\rho^2 + (a(t_r))^2 + (z -h(t_r))^2 - 2 a(t_r) \rho \cos(\theta(t_r) - \phi)} .\end{aligned} \hspace{\stretch{1}}(3.20)
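For any concrete trajectory this implicit condition is easy to solve numerically. A sketch, assuming (hypothetically) uniform circular motion with constant $a$ and $h$ and $\theta = \omega t$, using plain bisection (the bracket works because the left side grows without bound as $t_r$ decreases while the right side stays bounded, and the root is unique here since the slow circular motion keeps the difference monotonic):

```python
import math

c = 1.0
# hypothetical trajectory: uniform circular motion, a and h constant
a0, h0, omega = 0.5, 0.0, 0.8
rho, z, phi, t = 2.0, 1.0, 0.0, 5.0   # field point and observation time

def f(tr):
    """c (t - tr) - |r - x_c(tr)|; its root is the retarded time."""
    R = math.sqrt(rho**2 + a0**2 + (z - h0)**2
                  - 2 * a0 * rho * math.cos(omega * tr - phi))
    return c * (t - tr) - R

# f(t) = -R < 0 and f is large and positive for early enough tr: bracket and bisect
lo, hi = t - 100.0, t
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
t_r = 0.5 * (lo + hi)
print(t_r, f(t_r))
```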

# Doing this calculation with plain old cylindrical coordinates.

It’s worth trying this same calculation without any geometric algebra for contrast. I’d expect that the same sort of factorization can also be performed. Let’s try it:

\begin{aligned}\mathbf{x}_c &= \begin{bmatrix}a \cos\theta \\ a \sin\theta \\ h\end{bmatrix}\\ \mathbf{r} &= \begin{bmatrix}\rho \cos\phi \\ \rho \sin\phi \\ z\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(4.21)

\begin{aligned}\mathbf{R} &= \mathbf{r} - \mathbf{x}_c \\ &= \begin{bmatrix}\rho \cos\phi - a \cos\theta \\ \rho \sin\phi - a \sin\theta \\ z - h\end{bmatrix} \\ &=\begin{bmatrix}\cos\phi & - \cos\theta & 0 \\ \sin\phi & - \sin\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix}\rho \\ a \\ z - h\end{bmatrix}\end{aligned}

So for $\mathbf{R}^2$ we really just need to multiply out two matrices

\begin{aligned}\begin{bmatrix}\cos\phi & \sin\phi & 0 \\ -\cos\theta & - \sin\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix}\cos\phi & - \cos\theta & 0 \\ \sin\phi & - \sin\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}&=\begin{bmatrix}\cos^2\phi + \sin^2\phi & -(\cos\phi \cos\theta + \sin\phi \sin\theta) & 0 \\ -(\cos\phi \cos\theta + \sin\phi \sin\theta) & \cos^2\theta + \sin^2\theta & 0 \\ 0 & 0 & 1 \end{bmatrix} \\ &=\begin{bmatrix}1 & - \cos(\phi - \theta) & 0 \\ - \cos(\phi - \theta) & 1 & 0 \\ 0 & 0 & 1\end{bmatrix} \\ \end{aligned}
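A quick numerical check of this matrix product, with arbitrary sample angles:

```python
import numpy as np

phi, theta = 0.5, 1.1   # arbitrary sample angles

# the factor of R from the cylindrical factorization above
B = np.array([[np.cos(phi), -np.cos(theta), 0.0],
              [np.sin(phi), -np.sin(theta), 0.0],
              [0.0,          0.0,           1.0]])

product = B.T @ B
expected = np.array([[1.0, -np.cos(phi - theta), 0.0],
                     [-np.cos(phi - theta), 1.0, 0.0],
                     [0.0, 0.0, 1.0]])
print(np.max(np.abs(product - expected)))  # small; rounding error only
```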

So for $\mathbf{R}^2$ we have

\begin{aligned}\mathbf{R}^2&=\begin{bmatrix}\rho & a & (z -h) \end{bmatrix}\begin{bmatrix}1 & - \cos(\phi - \theta) & 0 \\ - \cos(\phi - \theta) & 1 & 0 \\ 0 & 0 & 1\end{bmatrix} \begin{bmatrix}\rho \\ a \\ z - h\end{bmatrix} \\ &=\begin{bmatrix}\rho & a & (z -h) \end{bmatrix}\begin{bmatrix}\rho - a \cos(\phi - \theta) \\ -\rho \cos(\phi - \theta) + a \\ z - h\end{bmatrix} \\ &= (z - h)^2 + \rho^2 + a^2 - 2 a \rho \cos(\phi - \theta)\end{aligned}

We get the same result this way, as expected. The matrices of multivector products provide a small computational savings, since we don’t have to look up the $\cos\phi \cos\theta + \sin\phi \sin\theta = \cos(\phi - \theta)$ identity, but other than that minor detail, we get the same result.

For the particle velocity we have

\begin{aligned}\mathbf{v}_c &= \begin{bmatrix}\dot{a} \cos\theta - a \dot{\theta} \sin\theta \\ \dot{a} \sin\theta + a \dot{\theta} \cos\theta \\ \dot{h} \end{bmatrix} \\ &=\begin{bmatrix}\cos\theta & - \sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1\end{bmatrix} \begin{bmatrix}\dot{a} \\ a \dot{\theta} \\ \dot{h} \end{bmatrix}\end{aligned}

So the dot product is

\begin{aligned}\mathbf{v}_c \cdot \mathbf{R} &=\begin{bmatrix}\dot{a} & a \dot{\theta} & \dot{h} \end{bmatrix}\begin{bmatrix}\cos\theta & \sin\theta & 0 \\ -\sin\theta & \cos\theta & 0 \\ 0 & 0 & 1\end{bmatrix}\begin{bmatrix}\cos\phi & - \cos\theta & 0 \\ \sin\phi & - \sin\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix}\rho \\ a \\ z - h\end{bmatrix} \\ &=\begin{bmatrix}\dot{a} & a \dot{\theta} & \dot{h} \end{bmatrix}\begin{bmatrix}\cos\theta \cos\phi + \sin\theta \sin\phi & -\cos^2 \theta - \sin^2 \theta & 0 \\ -\cos\phi \sin\theta + \cos\theta \sin\phi & 0 & 0 \\ 0 & 0 & 1\end{bmatrix}\begin{bmatrix}\rho \\ a \\ z - h\end{bmatrix} \\ &=\begin{bmatrix}\dot{a} & a \dot{\theta} & \dot{h} \end{bmatrix}\begin{bmatrix}\cos(\phi - \theta) & -1 & 0 \\ \sin(\phi - \theta) & 0 & 0 \\ 0 & 0 & 1\end{bmatrix}\begin{bmatrix}\rho \\ a \\ z - h\end{bmatrix} \\ &=\dot{h}(z - h) - \dot{a} a + \rho \dot{a} \cos(\phi - \theta) + \rho a \dot{\theta} \sin(\phi - \theta)\end{aligned}
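The whole matrix chain can be checked numerically against the closed form, again with arbitrary sample values:

```python
import numpy as np

# arbitrary sample values: field point, particle coordinates, and their rates
rho, z, phi = 1.3, 2.0, 0.5
h, a, theta = 0.06, 1.03, 0.6
hd, ad, thd = 0.2, 0.1, 2.0

vrow = np.array([ad, a * thd, hd])                   # transposed velocity factor
rot = np.array([[np.cos(theta),  np.sin(theta), 0.0],
                [-np.sin(theta), np.cos(theta), 0.0],
                [0.0, 0.0, 1.0]])
fac = np.array([[np.cos(phi), -np.cos(theta), 0.0],  # factor of R
                [np.sin(phi), -np.sin(theta), 0.0],
                [0.0, 0.0, 1.0]])
col = np.array([rho, a, z - h])

triple = vrow @ rot @ fac @ col
closed = (hd * (z - h) - ad * a
          + rho * ad * np.cos(phi - theta)
          + rho * a * thd * np.sin(phi - theta))
print(triple, closed)
```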

# Reflecting on the two calculation methods.

With a learning curve for both Geometric Algebra and the overhead required for this new multivector matrix formalism, it is definitely not a clear winner as a calculation method. Having worked a couple of examples this way now, the first being the N spherical pendulum problem, and now this potentials problem, I’ll keep my eye out for new opportunities. If nothing else this can be a useful private calculation tool, and in both cases the translation into more pedestrian matrix methods has proven to be not too difficult.

# References

[1] Peeter Joot. Spherical polar pendulum for one and multiple masses (Take II) [online]. http://sites.google.com/site/peeterjoot/math2009/multiPendulumSpherical2.pdf.

[2] Peeter Joot. Lagrangian and Euler-Lagrange equation evaluation for the spherical N-pendulum problem [online]. http://sites.google.com/site/peeterjoot/math2009/multiPendulumSphericalMatrix.pdf.