# Peeter Joot's (OLD) Blog.

# Archive for September, 2009

## turning off the gdb pager to collect gobs of info.

Posted by peeterjoot on September 30, 2009

I’ve got stuff interrupted with the debugger, so I can’t invoke our external tool to collect stacks. Since gdb doesn’t have output redirection for most commands, here’s how I was able to collect all my stacks while leaving my debugger attached:

(gdb) set height 0
(gdb) set logging on


Now I can go edit gdb.txt when it finishes (in the directory where I initially attached the debugger to my pid), and examine things. A small tip, but it took me 10 minutes to figure out how to do it (yet again), so it’s worth jotting down for future reference.
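For future reference, the full sequence including the collection command itself looks like the following (`thread apply all bt` is the standard gdb command to dump every thread's backtrace, and everything it prints lands in gdb.txt):

```
(gdb) set height 0
(gdb) set logging on
(gdb) thread apply all bt
(gdb) set logging off
```

Turning logging off at the end makes sure the file is flushed before going to edit it.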

Posted in debugging | Tagged: | 2 Comments »

## Restricting pattern replacement to specific lines?

Posted by peeterjoot on September 29, 2009

We’ve had a marketing-driven name change that impacts a lot of internal error strings in our code in a silly way, and I’ve got a lot of grep output like the following:

# grep -n '".* BA' *C
foo.C:1486:                       "Unexpected BA error - panic",
foo.C:1561:                       "Unexpected BA error - panic",
foo.C:1569:                       "Unexpected BA error",
...

All these ‘BA’s have to be changed to ‘BF’.

Ideally no customers would ever see these developer centric messages, but they will be in our logfiles, and potentially visible and confusing.

It’s not too hard to replace these, but there are a lot of them. I’ve had this kind of task before, and have done it using hacky throwaway command line “one-liners” like the following:

for i in `cut -f1,2 -d: grepoutput` ; do
   f=`echo $i | cut -f1 -d:`
   l=`echo $i | cut -f2 -d:`
   vim +$l +':s/\<BA\>/BF/g' +wq $f
done

Okay, as formatted above with newlines instead of semicolons it’s not strictly a one-liner, but when tossing off throwaway bash/ksh for-loop stuff like this I usually do write it on one line. This bash loop is easy enough to write, but messy and also fairly easy to get wrong. I’m tired of doing this over and over again.

It seemed to me that it was time to code up something that I can tweak for automated tasks like this, so I wrote the perl script below, which consumes grep -n output (i.e. file:lineno:text lines) and makes the desired replacements, whatever they are. I’ve based this strictly on the grep output because unrestricted replacements could be dangerous, and I wanted to visually verify that all the replacement sites were appropriate.

#!/usr/bin/perl

my %lines ;

while (<>)
{
   chomp ;

   /^(.*?):(\d+?):/ or die "unexpected grep -n output on line '$_'\n" ;

   $lines{$1} .= ",$2" ;
}

foreach (keys %lines)
{
   process_file( $_, split(/,/, $lines{$_}) ) ;
}

exit ;

sub process_file
{
   my ($filename, @n) = @_ ;

   my %thisLines = map { $_ => 1 } @n ;

   open my $fhIn, "<$filename" or die "could not open file for input '$filename'\n" ;
   open my $fhOut, ">$filename.2" or die "could not open file for output '$filename.2'\n" ;

   my $lineno = 0 ;

   while ( <$fhIn> )
   {
      $lineno++ ;

      if ( exists($thisLines{$lineno}) )
      {
         #print "$filename:$lineno: operating on: '$_'\n" ;

         # word delimiters to replace BA but not BASH nor ABAB, ...
         s/\bBA\b/BF/g ;
      }

      print $fhOut $_ ;
   }

   close $fhIn ;
   close $fhOut ;
}

This little script, while certainly longer than the one-liner method, is fairly straightforward and easy to modify for other similar ad-hoc replacement tasks later. However, I have to wonder if there’s an easier way?

Posted in perl and general scripting hackery | Tagged: , , | 2 Comments »

## Electromagnetic Gauge invariance.

Posted by peeterjoot on September 24, 2009

[Click here for a PDF of this post with nicer formatting]

At the end of section 12.1 in Jackson [1] he states that it is obvious that the Lorentz force equations are gauge invariant.

\begin{aligned}\frac{d \mathbf{p}}{dt} &= e \left( \mathbf{E} + \frac{\mathbf{u}}{c} \times \mathbf{B} \right) \\ \frac{d E}{dt} &= e \mathbf{u} \cdot \mathbf{E} \end{aligned} \quad\quad\quad(1)

Since I didn’t remember what gauge invariance was, it wasn’t so obvious to me. But looking ahead to problem 12.2 on this invariance, we have a gauge transformation defined in four vector form as

\begin{aligned}A^\alpha \rightarrow A^\alpha + \partial^\alpha \psi\end{aligned} \quad\quad\quad(3)

In vector form with $A = \gamma_\alpha A^\alpha$, this gauge transformation can be written

\begin{aligned}A \rightarrow A + \nabla \psi\end{aligned} \quad\quad\quad(4)

so this is really a statement that we add a spacetime gradient of something to the four vector potential. Given this, how does the field transform?
\begin{aligned}F &= \nabla \wedge A \\ &\rightarrow \nabla \wedge (A + \nabla \psi) \\ &= F + \nabla \wedge \nabla \psi\end{aligned}

But $\nabla \wedge \nabla \psi = 0$ (assuming partials are interchangeable), so the field is invariant regardless of whether we are talking about the field equations themselves

\begin{aligned}\nabla F = J/\epsilon_0 c\end{aligned} \quad\quad\quad(5)

or the Lorentz force

\begin{aligned}\frac{dp}{d\tau} = e F \cdot v/c\end{aligned} \quad\quad\quad(6)

So, once you know the definition of the gauge transformation in four vector form, yes, this is justifiably obvious. However, to anybody who is not familiar with Geometric Algebra, perhaps this is still not so obvious. How does this translate to the more commonplace tensor or spacetime vector notations?

The tensor four vector translation is the easier of the two, and there we have

\begin{aligned}F^{\alpha\beta} &= \partial^\alpha A^\beta -\partial^\beta A^\alpha \\ &\rightarrow \partial^\alpha (A^\beta + \partial^\beta \psi) -\partial^\beta (A^\alpha + \partial^\alpha \psi) \\ &= F^{\alpha\beta} + \partial^\alpha \partial^\beta \psi -\partial^\beta \partial^\alpha \psi \\ &= F^{\alpha\beta}\end{aligned}

As with $\nabla \wedge \nabla \psi = 0$, interchange of partials means the field components $F^{\alpha\beta}$ are unchanged by adding this gradient.

Finally, in plain old spatial vector form, how is this gauge invariance expressed? In components we have

\begin{aligned}A^0 &\rightarrow A^0 + \partial^0 \psi = \phi + \frac{1}{{c}}\frac{\partial \psi}{\partial t} \\ A^k &\rightarrow A^k + \partial^k \psi = A^k - \frac{\partial \psi}{\partial x^k}\end{aligned} \quad\quad\quad(7)

This last in vector form is $\mathbf{A} \rightarrow \mathbf{A} - \boldsymbol{\nabla} \psi$, where the sign inversion comes from $\partial^k = -\partial_k = -\partial/\partial x^k$, assuming a $+---$ metric.
We want to apply this to the electric and magnetic field components

\begin{aligned}\mathbf{E} &= -\boldsymbol{\nabla} \phi - \frac{1}{{c}}\frac{\partial \mathbf{A}}{\partial t} \\ \mathbf{B} &= \boldsymbol{\nabla} \times \mathbf{A}\end{aligned} \quad\quad\quad(9)

The electric field transforms as

\begin{aligned}\mathbf{E} &\rightarrow -\boldsymbol{\nabla} \left( \phi + \frac{1}{{c}}\frac{\partial \psi}{\partial t}\right) - \frac{1}{{c}}\frac{\partial }{\partial t} \left( \mathbf{A} - \boldsymbol{\nabla} \psi \right) \\ &= \mathbf{E} -\frac{1}{{c}} \boldsymbol{\nabla} \frac{\partial \psi}{\partial t} + \frac{1}{{c}}\frac{\partial }{\partial t} \boldsymbol{\nabla} \psi \end{aligned}

With partial interchange this is just $\mathbf{E}$. For the magnetic field we have

\begin{aligned}\mathbf{B} &\rightarrow \boldsymbol{\nabla} \times \left( \mathbf{A} - \boldsymbol{\nabla} \psi \right) \\ &= \mathbf{B} - \boldsymbol{\nabla} \times \boldsymbol{\nabla} \psi \end{aligned}

Again, since the partials interchange we have $\boldsymbol{\nabla} \times \boldsymbol{\nabla} \psi = 0$, so this is just the magnetic field. Alright. Worked this three different ways, so now I can say it’s obvious.

# References

[1] JD Jackson. Classical Electrodynamics. Wiley, 2nd edition, 1975.

## grep with a range using a perl one liner.

Posted by peeterjoot on September 23, 2009

Here’s a small shell scripting problem. I have grep output that I’d like further filtered:

# ./displayInfo | grep peeterj
Offline RG:blah_peeterj_0-rg
Offline RG:blah_peeterj_0-rg_MLN-rg
Offline RG:idle_peeterj_998_goo-rg
Offline RG:primary_peeterj_900-rg
Online Equivalency:blah_peeterj_0-rg_MLN-rg_group-equ
... MORE STUFF I DON'T CARE ABOUT. ...
Online Equivalency:instancehost_peeterj-equ
'- Online APP:instancehost_peeterj_goo:goo
Online Equivalency:primary_peeterj_900-rg_group-equ

peeterj is my userid, and the command in question that generates this has output for all the other userids on the system.
I’m only interested in the subset of the info that has my name, hence the grep. However, once the text Equivalency is displayed I’m not interested in any more of it.

I could filter out all the patterns that I’m not interested in, doing something like:

# ./displayInfo | grep peeterj | grep -v -e Equivalency -e instancehost -e ...

but this is a bit cumbersome. An alternative is filtering specifically on what I want:

# ./displayInfo | grep -e 'peeterj.*blah' -e 'peeterj.*idle' -e 'primary_peeterj' -e ...

but that also means I have to know and enumerate all such expressions for what I’m interested in. Since I’m on Linux my grep is gnu-grep, so I considered using ‘grep -B N’ to show N lines of text preceding a match, but this also outputs the text I’m not interested in, so it doesn’t really work.

Here’s what I came up with (I’m sure there are lots of ways, some perhaps easier, but I liked this one). It uses the perl filtering option -n once again to convert the entire script into a filter (specified here inline with -e instead of in a file):

# ./displayInfo | perl -n -e 'next unless (/$ENV{USER}/) ; last if ( /Equivalency/ ) ; print ;'

Basically, this one-liner is as if I’d written the following perl script to read and process all of STDIN:

#!/usr/bin/perl

while (<>)
{
my $line = $_ ;

next unless ($line =~ /$ENV{USER}/) ; # "grep" for my userid (peeterj).

# Won't get here until I've started seeing the peeterj text.
last if ( $line =~ /Equivalency/ ) ; # stop when I've seen enough.

# If I get this far print the "matched" line as is:
print $line ;
}

## Lorentz force from Lagrangian (non-covariant)

Posted by peeterjoot on September 22, 2009

# Motivation

Jackson [1] gives the non-covariant Lagrangian for the Lorentz force

\begin{aligned}L = - m c^2 \sqrt{1 -\mathbf{u}^2/c^2} + \frac{e}{c} \mathbf{u} \cdot \mathbf{A} - e \phi\end{aligned} \quad\quad\quad(1)

and leaves it as an exercise for the reader to verify that this produces the Lorentz force law. I felt like trying this anew, since I recall having trouble the first time I tried it (the covariant derivation was easier).

# Guts

Jackson gives a tip to use the convective derivative (yet another name for the chain rule), and using this in the Euler Lagrange equations we have

\begin{aligned}\boldsymbol{\nabla} \mathcal{L} = \frac{d}{dt} \boldsymbol{\nabla}_\mathbf{u} \mathcal{L} = \left( \frac{\partial}{\partial t} + \mathbf{u} \cdot \boldsymbol{\nabla} \right) \sigma_a \frac{\partial \mathcal{L}}{\partial \dot{x}^a}\end{aligned} \quad\quad\quad(2)

where $\{\sigma_a\}$ is the spatial basis. The first order of business is calculating the gradient and conjugate momenta. For the latter we have

\begin{aligned}\sigma_a \frac{\partial \mathcal{L}}{\partial \dot{x}^a}&=\sigma_a \left(- m c^2 \gamma \frac{1}{{2}} (-2) \dot{x}^a/c^2 + \frac{e}{c} A^a \right) \\ &=m \gamma \mathbf{u} + \frac{e}{c} \mathbf{A} \\ &\equiv \mathbf{p} + \frac{e}{c}\mathbf{A}\end{aligned}

Applying the convective derivative we have

\begin{aligned}\frac{d}{dt} \sigma_a \frac{\partial \mathcal{L}}{\partial \dot{x}^a}&=\frac{d\mathbf{p}}{dt} + \frac{e}{c} \frac{\partial \mathbf{A}}{\partial t}+ \frac{e}{c} \mathbf{u} \cdot \boldsymbol{\nabla} \mathbf{A}\end{aligned}

and for the gradient we have

\begin{aligned}\sigma_a \frac{\partial \mathcal{L}}{\partial x^a} = e\left( \frac{1}{{c}}\dot{x}^b \boldsymbol{\nabla} A^b - \boldsymbol{\nabla} \phi \right)\end{aligned}

Rearranging equation (2) for this Lagrangian we have

\begin{aligned}\frac{d\mathbf{p}}{dt} =e \left( - \boldsymbol{\nabla} \phi- \frac{1}{c} \frac{\partial \mathbf{A}}{\partial t}- \frac{1}{c} \mathbf{u} \cdot \boldsymbol{\nabla} \mathbf{A} +\frac{1}{{c}} \dot{x}^b \boldsymbol{\nabla} A^b \right)\end{aligned}

The first two terms are the electric field

\begin{aligned}\mathbf{E} \equiv- \boldsymbol{\nabla} \phi- \frac{1}{c} \frac{\partial \mathbf{A}}{\partial t}\end{aligned}

So it remains to be shown that the remaining two terms equal $(\mathbf{u}/c) \times \mathbf{B} = (\mathbf{u}/c) \times (\boldsymbol{\nabla} \times \mathbf{A})$. Using the Hestenes prime notation, where primes denote what the gradient is operating on, we have

\begin{aligned}\dot{x}^b \boldsymbol{\nabla} A^b - \mathbf{u} \cdot \boldsymbol{\nabla} \mathbf{A}&=\boldsymbol{\nabla}' \mathbf{u} \cdot \mathbf{A}' - \mathbf{u} \cdot \boldsymbol{\nabla} \mathbf{A} \\ &=-\mathbf{u} \cdot (\boldsymbol{\nabla} \wedge \mathbf{A}) \\ &=\frac{1}{{2}} \left((\boldsymbol{\nabla} \wedge \mathbf{A}) \mathbf{u} -\mathbf{u} (\boldsymbol{\nabla} \wedge \mathbf{A}) \right) \\ &=\frac{I}{2} \left((\boldsymbol{\nabla} \times \mathbf{A}) \mathbf{u} -\mathbf{u} (\boldsymbol{\nabla} \times \mathbf{A}) \right) \\ &=-I (\mathbf{u} \wedge \mathbf{B}) \\ &=-I I (\mathbf{u} \times \mathbf{B}) \\ &=\mathbf{u} \times \mathbf{B} \\ \end{aligned}

I’ve used the Geometric Algebra identities I’m familiar with to regroup things, but this last bit can likely be done with index manipulation too. The exercise is complete, and we have from the Lagrangian

\begin{aligned}\frac{d\mathbf{p}}{dt} = e \left( \mathbf{E} + \frac{1}{{c}} \mathbf{u} \times \mathbf{B} \right)\end{aligned} \quad\quad\quad(3)
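As an aside, the cross product step above can indeed be done with index manipulation, as speculated. Filling that in (my own verification, using the contraction identity $\epsilon_{abc}\epsilon_{crs} = \delta_{ar}\delta_{bs} - \delta_{as}\delta_{br}$):

\begin{aligned}(\mathbf{u} \times \mathbf{B})_a &= \epsilon_{abc} u_b (\boldsymbol{\nabla} \times \mathbf{A})_c \\ &= \epsilon_{abc} \epsilon_{crs} u_b \partial_r A_s \\ &= (\delta_{ar}\delta_{bs} - \delta_{as}\delta_{br}) u_b \partial_r A_s \\ &= u_b \partial_a A_b - u_b \partial_b A_a\end{aligned}

which is just the component form of $\dot{x}^b \boldsymbol{\nabla} A^b - (\mathbf{u} \cdot \boldsymbol{\nabla}) \mathbf{A} = \mathbf{u} \times \mathbf{B}$.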

# References

[1] JD Jackson. Classical Electrodynamics. Wiley, 2nd edition, 1975.

Posted in Math and Physics Learning. | Tagged: , , | 2 Comments »

## Workplace Security Check Pass

Posted by peeterjoot on September 21, 2009

I got the following today in my email:

Workplace Security Check Pass

Why you are receiving this notification:
Peeter Joot, you have received this notice because your workplace was reviewed during an after hours Workplace Security Check. Your workplace passed the test. Workplace Security passes are reported to SWG Executives on a monthly basis.

I could enumerate many ways that security is busted in DB2 development here at work. In fact I wrote a big rant about stupid security policy at IBM last week, after having to crack root on some of our Unix machines since they don’t give us the required access anymore. I refrained from posting it since it was too sad — it was fun to write though.

However, I now pass the bogus checking that is reported to the executives. I failed last time for leaving blank scrap paper on my desk. I now hide my blank scrap paper behind a big cheerios box on my book shelf.

They also didn’t notice one of the security violations they got me for last time: leaving the key in my empty laptop docking station. A clever coworker was able to figure out how that could be a security risk (I couldn’t): you could sneak in after hours, duplicate the key, and then steal the laptop later when I’m out of my cubicle on a bathroom or coffee fetching break. None of that is required of course, since I never actually take the key out of the docking station to lock it when I leave my cube, also violating policy. Because of the security checking I took to hiding the docking station key under a coffee cup for a while when I left for the day (and know others who do the same).

Most of these checks are totally pointless IMO. The biggest and only real risk in my opinion is insider action. Any student intern can walk out with all the important bits of DB2’s source code on a USB key, and I’ll bet countless people in the lab leave for home every day with the code on their laptops, protected by easily cracked passwords. Has our code ever been maliciously acquired by a competitor this way? I doubt it. Even if they got it, we have enough trouble figuring it out ourselves. Without guidance, build infrastructure, design plans, regression and performance and system test infrastructure, and on and on, the direct utility of our code in its vanilla state is somewhat minimized. That isn’t even counting the angry pack of vicious IBM lawyers that would be hunting you down if you tried to steal, sell, or use the code in underhanded ways.

I don’t know how corporate security works in other companies, but in mine it seems to be all about appearances. There are a set of rules defined that give some security executive a happy feeling. So long as everybody reports up the command chain that they are following the rules corporate security is intact and the security execs are happy. The rules don’t actually have to be followed and the enforcement is often not there, but management has to be told or pretend to be told that the rules have been followed and then everybody is happy.

Posted in Incoherent ramblings | 1 Comment »

## Spherical Polar unit vectors in exponential form.

Posted by peeterjoot on September 20, 2009

# Motivation

In [1] I blundered upon a particularly concise exponential non-coordinate form for the unit vectors in a spherical polar coordinate system. For future reference outside of a quantum mechanical context, here is a separate and more concise iteration of these results.

# The rotation and notation.

The spherical polar rotor is a composition of rotations, expressed as half angle exponentials. Following the normal physics conventions, we first apply a $z,x$ plane rotation by angle $\theta$, then an $x,y$ plane rotation by angle $\phi$. This produces the rotor

\begin{aligned}R = e^{\mathbf{e}_{31}\theta/2} e^{\mathbf{e}_{12}\phi/2}\end{aligned} \quad\quad\quad(1)

Our triplet of Cartesian unit vectors is therefore rotated as

\begin{aligned}\begin{pmatrix}\hat{\mathbf{r}} \\ \hat{\boldsymbol{\theta}} \\ \hat{\boldsymbol{\phi}} \\ \end{pmatrix}&=\tilde{R}\begin{pmatrix}\mathbf{e}_3 \\ \mathbf{e}_1 \\ \mathbf{e}_2 \\ \end{pmatrix}R\end{aligned} \quad\quad\quad(2)

In the quantum mechanical context it was convenient to denote the $x,y$ plane unit bivector with the imaginary symbol

\begin{aligned}i = \mathbf{e}_1 \mathbf{e}_2\end{aligned} \quad\quad\quad(3)

reserving for the spatial pseudoscalar the capital

\begin{aligned}I = \mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_3 = \hat{\mathbf{r}} \hat{\boldsymbol{\theta}} \hat{\boldsymbol{\phi}} = i \mathbf{e}_3\end{aligned} \quad\quad\quad(4)

Note the characteristic differences between these two “imaginaries”. The planar quantity $i = \mathbf{e}_1 \mathbf{e}_2$ commutes with $\mathbf{e}_3$, but anticommutes with either $\mathbf{e}_1$ or $\mathbf{e}_2$. On the other hand the spatial pseudoscalar $I$ commutes with any vector, bivector or trivector in the algebra.

# Application of the rotor. The spherical polar unit vectors.

Having fixed notation, let’s apply the rotation to each of the unit vectors in sequence, starting with the calculation for $\hat{\boldsymbol{\phi}}$. This is

\begin{aligned}\hat{\boldsymbol{\phi}} &= e^{-i \phi/2} e^{-\mathbf{e}_{31}\theta/2} (\mathbf{e}_2) e^{\mathbf{e}_{31}\theta/2} e^{i\phi/2} \\ &= \mathbf{e}_2 e^{i\phi} \end{aligned}

Here, since $\mathbf{e}_2$ commutes with the rotor bivector $\mathbf{e}_3 \mathbf{e}_1$ the innermost exponentials cancel, leaving just the $i\phi$ rotation. For $\hat{\mathbf{r}}$ it is a bit messier, and we have

\begin{aligned}\hat{\mathbf{r}} &= e^{-i \phi/2} e^{-\mathbf{e}_{31}\theta/2} (\mathbf{e}_3) e^{\mathbf{e}_{31}\theta/2} e^{i\phi/2} \\ &= e^{-i \phi/2} \mathbf{e}_3 e^{\mathbf{e}_{31}\theta} e^{i\phi/2} \\ &= e^{-i \phi/2} (\mathbf{e}_3 \cos\theta + \mathbf{e}_1 \sin\theta) e^{i\phi/2} \\ &= \mathbf{e}_3 \cos\theta + \mathbf{e}_1 \sin\theta e^{i\phi} \\ &= \mathbf{e}_3 \cos\theta + \mathbf{e}_1 \mathbf{e}_2 \sin\theta \mathbf{e}_2 e^{i\phi} \\ &= \mathbf{e}_3 \cos\theta + i \sin\theta \hat{\boldsymbol{\phi}} \\ &= \mathbf{e}_3 (\cos\theta + \mathbf{e}_3 i \sin\theta \hat{\boldsymbol{\phi}}) \\ &= \mathbf{e}_3 e^{I\hat{\boldsymbol{\phi}}\theta} \end{aligned}

Finally for $\hat{\boldsymbol{\theta}}$, we have a similar messy expansion

\begin{aligned}\hat{\boldsymbol{\theta}} &= e^{-i \phi/2} e^{-\mathbf{e}_{31}\theta/2} (\mathbf{e}_1) e^{\mathbf{e}_{31}\theta/2} e^{i\phi/2} \\ &= e^{-i \phi/2} \mathbf{e}_1 e^{\mathbf{e}_{31}\theta} e^{i\phi/2} \\ &= e^{-i \phi/2} (\mathbf{e}_1 \cos\theta - \mathbf{e}_3 \sin\theta) e^{i\phi/2} \\ &= \mathbf{e}_1 \cos\theta e^{i\phi} - \mathbf{e}_3 \sin\theta \\ &= i \cos\theta \mathbf{e}_2 e^{i\phi} - \mathbf{e}_3 \sin\theta \\ &= i \hat{\boldsymbol{\phi}} \cos\theta - \mathbf{e}_3 \sin\theta \\ &= i \hat{\boldsymbol{\phi}} (\cos\theta + \hat{\boldsymbol{\phi}} i \mathbf{e}_3 \sin\theta) \\ &= i \hat{\boldsymbol{\phi}} e^{I\hat{\boldsymbol{\phi}}\theta}\end{aligned}

Summarizing the three of these relations we have for the rotated unit vectors

\begin{aligned}\hat{\mathbf{r}} &= \mathbf{e}_3 e^{I \hat{\boldsymbol{\phi}} \theta} \\ \hat{\boldsymbol{\theta}} &= i \hat{\boldsymbol{\phi}} e^{I \hat{\boldsymbol{\phi}} \theta} \\ \hat{\boldsymbol{\phi}} &= \mathbf{e}_2 e^{i\phi} \end{aligned} \quad\quad\quad(5)

and in particular for the radial position vector from the origin, rotating from the polar axis, we have

\begin{aligned}\mathbf{x} &= r \hat{\mathbf{r}} = r \mathbf{e}_3 e^{I\hat{\boldsymbol{\phi}} \theta}\end{aligned} \quad\quad\quad(8)

Compare this to the coordinate representation

\begin{aligned}\mathbf{x} = r(\sin\theta \cos\phi, \sin\theta \sin\phi, \cos\theta)\end{aligned} \quad\quad\quad(9)

From the coordinate representation alone, it is not initially obvious that these $\theta$ and $\phi$ rotations admit such a tidy factorization.
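As a quick consistency check (a verification step I'm adding here), expanding the exponential, and using $\mathbf{e}_3 I = i$ and $i \hat{\boldsymbol{\phi}} = \mathbf{e}_1 e^{i\phi}$, recovers exactly the coordinate form:

\begin{aligned}\mathbf{e}_3 e^{I \hat{\boldsymbol{\phi}} \theta} &= \mathbf{e}_3 \cos\theta + \mathbf{e}_3 I \hat{\boldsymbol{\phi}} \sin\theta \\ &= \mathbf{e}_3 \cos\theta + i \hat{\boldsymbol{\phi}} \sin\theta \\ &= \mathbf{e}_3 \cos\theta + \mathbf{e}_1 e^{i\phi} \sin\theta \\ &= \mathbf{e}_3 \cos\theta + (\mathbf{e}_1 \cos\phi + \mathbf{e}_2 \sin\phi) \sin\theta\end{aligned}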

# A small example application.

Let’s use these results to compute the spherical polar volume element. Pictorially this can be read off simply from a diagram. If one is less trusting of pictorial means (or wants a method that is more generally applicable), we can also do this particular calculation algebraically, expanding the determinant of partials

\begin{aligned}\begin{vmatrix}\frac{\partial \mathbf{x}}{\partial r} & \frac{\partial \mathbf{x}}{\partial \theta} & \frac{\partial \mathbf{x}}{\partial \phi} \\ \end{vmatrix} dr d\theta d\phi&=\begin{vmatrix}\sin\theta \cos\phi & \cos\theta \cos\phi & -\sin\theta \sin\phi \\ \sin\theta \sin\phi & \cos\theta \sin\phi & \sin\theta \cos\phi \\ \cos\theta & -\sin\theta & 0 \\ \end{vmatrix} r^2 dr d\theta d\phi\end{aligned} \quad\quad\quad(10)

One can chug through the trig reduction for this determinant with not too much trouble, but it isn’t particularly fun.

Now compare to the same calculation proceeding directly with the exponential form. We do still need to compute the partials

\begin{aligned}\frac{\partial \mathbf{x}}{\partial r} = \hat{\mathbf{r}}\end{aligned}

\begin{aligned}\frac{\partial \mathbf{x}}{\partial \theta} &= r \mathbf{e}_3 \frac{\partial }{\partial \theta} e^{I\hat{\boldsymbol{\phi}} \theta} \\ &= r \hat{\mathbf{r}} I \hat{\boldsymbol{\phi}} \\ &= r \hat{\mathbf{r}} (\hat{\mathbf{r}} \hat{\boldsymbol{\theta}} \hat{\boldsymbol{\phi}}) \hat{\boldsymbol{\phi}} \\ &= r \hat{\boldsymbol{\theta}}\end{aligned}

\begin{aligned}\frac{\partial \mathbf{x}}{\partial \phi} &= r \mathbf{e}_3 \frac{\partial }{\partial \phi} (\cos\theta + I\hat{\boldsymbol{\phi}} \sin\theta) \\ &= -r \mathbf{e}_3 I i \hat{\boldsymbol{\phi}} \sin\theta \\ &= r \hat{\boldsymbol{\phi}} \sin\theta \end{aligned}

So the area element, the oriented area of the parallelogram between the two vectors $d\theta \partial \mathbf{x}/\partial \theta$, and $d\phi \partial \mathbf{x}/\partial \phi$ on the spherical surface at radius $r$ is

\begin{aligned}d\mathbf{S} = \left(d\theta \frac{\partial \mathbf{x}}{\partial \theta}\right) \wedge \left( d\phi \frac{\partial \mathbf{x}}{\partial \phi} \right) = r^2 \hat{\boldsymbol{\theta}} \hat{\boldsymbol{\phi}} \sin\theta d\theta d\phi\end{aligned} \quad\quad\quad(11)

and the volume element in trivector form is just the product

\begin{aligned}d\mathbf{V} = \left(dr\frac{\partial \mathbf{x}}{\partial r}\right) \wedge d\mathbf{S}= r^2 \sin\theta I dr d\theta d\phi\end{aligned} \quad\quad\quad(12)

# References

[1] Peeter Joot. Bivector form of quantum angular momentum operator [online]. http://sites.google.com/site/peeterjoot/math2009/qmAngularMom.pdf.

## Bivector Geometry in Geometric Algebra.

Posted by peeterjoot on September 20, 2009

# Motivation.

Consider the derivative of a vector parametrized bivector square such as

\begin{aligned}\frac{d}{d\lambda} {(\mathbf{x} \wedge \mathbf{k})^2} = \left(\frac{d\mathbf{x}}{d\lambda} \wedge \mathbf{k}\right) \left(\mathbf{x} \wedge \mathbf{k}\right)+\left(\mathbf{x} \wedge \mathbf{k}\right) \left(\frac{d \mathbf{x}}{d\lambda} \wedge \mathbf{k}\right)\end{aligned}

where $\mathbf{k}$ is constant. In this case the left hand side is a scalar, so the right hand side, this symmetric product of bivectors, must also be a scalar. In the more general case, do we have any reason to assume a symmetric bivector product is a scalar, as is the case for the symmetric vector product?

Here this question is considered, and products of intersecting bivectors are examined. We take intersecting bivectors to mean that a common vector ($\mathbf{k}$ above) can be factored out of both of the two bivectors, leaving a vector remainder. Since all non-coplanar bivectors in $\mathbb{R}^{3}$ intersect, this examination will cover the important special case of three dimensional plane geometry.

A result of this examination is that many of the concepts familiar from vector geometry such as orthogonality, projection, and rejection will have direct bivector equivalents.

General bivector geometry, in spaces where non-coplanar bivectors do not necessarily intersect (such as in $\mathbb{R}^{4}$) is also considered. Some of the results require plane intersection, or become simpler in such circumstances. This will be pointed out when appropriate.

# Components of grade two multivector product.

The geometric product of two bivectors can be written:

\begin{aligned}\mathbf{A} \mathbf{B} = {\left\langle{{\mathbf{A} \mathbf{B}}}\right\rangle}_{{0}}+{\left\langle{{\mathbf{A} \mathbf{B}}}\right\rangle}_{{2}}+{\left\langle{{\mathbf{A} \mathbf{B}}}\right\rangle}_{{4}}= {\mathbf{A} \cdot \mathbf{B}}+{\left\langle{{\mathbf{A} \mathbf{B}}}\right\rangle}_{{2}}+{\mathbf{A} \wedge \mathbf{B}}\end{aligned} \quad\quad\quad(1)

\begin{aligned}\mathbf{B} \mathbf{A} = {\left\langle{{\mathbf{B} \mathbf{A}}}\right\rangle}_{{0}}+{\left\langle{{\mathbf{B} \mathbf{A}}}\right\rangle}_{{2}}+{\left\langle{{\mathbf{B} \mathbf{A}}}\right\rangle}_{{4}}= {\mathbf{B} \cdot \mathbf{A}}+{\left\langle{{\mathbf{B} \mathbf{A}}}\right\rangle}_{{2}}+{\mathbf{B} \wedge \mathbf{A}}\end{aligned} \quad\quad\quad(2)

Because we have three terms involved, unlike the vector dot and wedge product, we cannot generally separate these terms by symmetric and antisymmetric parts. However, forming those sums will still be worthwhile, especially for the case of intersecting bivectors, since the last term is zero in that case.

## Sign change of each grade term with commutation.

Starting with the last term we can first observe that

\begin{aligned}\mathbf{A} \wedge \mathbf{B} = \mathbf{B} \wedge \mathbf{A}\end{aligned} \quad\quad\quad(3)

To show this, let $\mathbf{A} = \mathbf{a} \wedge \mathbf{b}$, and $\mathbf{B} = \mathbf{c} \wedge \mathbf{d}$. When $\mathbf{A} \wedge \mathbf{B} \ne 0$, one can write:

\begin{aligned}\mathbf{A} \wedge \mathbf{B} &= \mathbf{a} \wedge \mathbf{b} \wedge \mathbf{c} \wedge \mathbf{d} \\ &= - \mathbf{b} \wedge \mathbf{c} \wedge \mathbf{d} \wedge \mathbf{a} \\ &= \mathbf{c} \wedge \mathbf{d} \wedge \mathbf{a} \wedge \mathbf{b} \\ &= \mathbf{B} \wedge \mathbf{A} \\ \end{aligned}

To see how the signs of the remaining two terms vary with commutation form:

\begin{aligned}(\mathbf{A} + \mathbf{B})^2&= (\mathbf{A} + \mathbf{B})(\mathbf{A} + \mathbf{B}) \\ &= \mathbf{A}^2 + \mathbf{B}^2 + \mathbf{A} \mathbf{B} + \mathbf{B} \mathbf{A} \\ \end{aligned}

When $\mathbf{A}$ and $\mathbf{B}$ intersect we can write $\mathbf{A} = \mathbf{a} \wedge \mathbf{x}$, and $\mathbf{B} = \mathbf{b} \wedge \mathbf{x}$, thus the sum is a bivector

\begin{aligned}(\mathbf{A} + \mathbf{B})= (\mathbf{a} + \mathbf{b}) \wedge \mathbf{x}\end{aligned}

And so, the square of the two is a scalar. When $\mathbf{A}$ and $\mathbf{B}$ have only non intersecting components, such as the grade two $\mathbb{R}^{4}$ multivector $\mathbf{e}_{12} + \mathbf{e}_{34}$, the square of this sum will have both grade four and scalar parts.

Since the LHS equals the RHS, the grades of the two must also be the same. This implies that the quantity

\begin{aligned}\mathbf{A} \mathbf{B} + \mathbf{B} \mathbf{A} = \mathbf{A} \cdot \mathbf{B} + \mathbf{B} \cdot \mathbf{A}+{\left\langle{{\mathbf{A} \mathbf{B}}}\right\rangle}_{2} + {\left\langle{{\mathbf{B} \mathbf{A}}}\right\rangle}_{2}+\mathbf{A} \wedge \mathbf{B} + \mathbf{B} \wedge \mathbf{A}\end{aligned}

is a scalar $\iff$ $\mathbf{A} + \mathbf{B}$ is a bivector, and in general has scalar and grade four terms. Because this symmetric sum has no grade two terms, regardless of whether $\mathbf{A}$, and $\mathbf{B}$ intersect, we have:

\begin{aligned}{\left\langle{{\mathbf{A} \mathbf{B}}}\right\rangle}_{2} + {\left\langle{{\mathbf{B} \mathbf{A}}}\right\rangle}_{2} = 0\end{aligned}

\begin{aligned}\implies{\left\langle{{\mathbf{A} \mathbf{B}}}\right\rangle}_{2} = -{\left\langle{{\mathbf{B} \mathbf{A}}}\right\rangle}_{2}\end{aligned} \quad\quad\quad(4)

One would intuitively expect $\mathbf{A} \cdot \mathbf{B} = \mathbf{B} \cdot \mathbf{A}$. This can be demonstrated by forming the complete symmetric sum

\begin{aligned}\mathbf{A} \mathbf{B} + \mathbf{B} \mathbf{A} &= {\mathbf{A} \cdot \mathbf{B}} +{\mathbf{B} \cdot \mathbf{A}}+{\left\langle{{\mathbf{A} \mathbf{B}}}\right\rangle}_{{2}} +{\left\langle{{\mathbf{B} \mathbf{A}}}\right\rangle}_{{2}}+{\mathbf{A} \wedge \mathbf{B}} + {\mathbf{B} \wedge \mathbf{A}} \\ &= {\mathbf{A} \cdot \mathbf{B}} +{\mathbf{B} \cdot \mathbf{A}}+{\left\langle{{\mathbf{A} \mathbf{B}}}\right\rangle}_{{2}} -{\left\langle{{\mathbf{A} \mathbf{B}}}\right\rangle}_{{2}}+{\mathbf{A} \wedge \mathbf{B}} + {\mathbf{A} \wedge \mathbf{B}} \\ &= {\mathbf{A} \cdot \mathbf{B}} +{\mathbf{B} \cdot \mathbf{A}}+2{\mathbf{A} \wedge \mathbf{B}} \\ \end{aligned}

The LHS is unchanged by interchange of $\mathbf{A}$ and $\mathbf{B}$, as is ${\mathbf{A} \wedge \mathbf{B}}$. So for the RHS to also be unchanged, the remaining grade 0 term must be too:

\begin{aligned}\mathbf{A} \cdot \mathbf{B} = \mathbf{B} \cdot \mathbf{A}\end{aligned} \quad\quad\quad(5)

## Dot, wedge and grade two terms of bivector product.

Collecting the results of the previous section and substituting back into equation 1 we have:

\begin{aligned}\mathbf{A} \cdot \mathbf{B} = {\left\langle{{\frac{\mathbf{A} \mathbf{B} + \mathbf{B}\mathbf{A}}{2}}}\right\rangle}_{{0}}\end{aligned} \quad\quad\quad(6)

\begin{aligned}{\left\langle{{\mathbf{A} \mathbf{B}}}\right\rangle}_{2} = \frac{\mathbf{A} \mathbf{B} - \mathbf{B}\mathbf{A}}{2}\end{aligned} \quad\quad\quad(7)

\begin{aligned}\mathbf{A} \wedge \mathbf{B} = {\left\langle{{\frac{\mathbf{A} \mathbf{B} + \mathbf{B}\mathbf{A}}{2}}}\right\rangle}_{{4}}\end{aligned} \quad\quad\quad(8)

When these intersect in a line the wedge term is zero, so for that special case we can write:

\begin{aligned}\mathbf{A} \cdot \mathbf{B} = \frac{\mathbf{A} \mathbf{B} + \mathbf{B}\mathbf{A}}{2}\end{aligned}

\begin{aligned}{\left\langle{{\mathbf{A} \mathbf{B}}}\right\rangle}_{2} = \frac{\mathbf{A} \mathbf{B} - \mathbf{B}\mathbf{A}}{2}\end{aligned}

\begin{aligned}\mathbf{A} \wedge \mathbf{B} = 0\end{aligned}

(note that this is always the case for $\mathbb{R}^{3}$).
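As a concrete $\mathbb{R}^{3}$ illustration (my own example), take $\mathbf{A} = \mathbf{e}_1 \mathbf{e}_2$ and $\mathbf{B} = \mathbf{e}_2 \mathbf{e}_3$, intersecting along $\mathbf{e}_2$:

\begin{aligned}\mathbf{A} \mathbf{B} &= \mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_2 \mathbf{e}_3 = \mathbf{e}_1 \mathbf{e}_3 \\ \mathbf{B} \mathbf{A} &= \mathbf{e}_2 \mathbf{e}_3 \mathbf{e}_1 \mathbf{e}_2 = -\mathbf{e}_1 \mathbf{e}_3\end{aligned}

so $\mathbf{A} \cdot \mathbf{B} = 0$, ${\left\langle{{\mathbf{A} \mathbf{B}}}\right\rangle}_{2} = \mathbf{e}_1 \mathbf{e}_3$, and $\mathbf{A} \wedge \mathbf{B} = 0$. The symmetric part here is the (zero) scalar, and the whole product lives in the antisymmetric grade two term, consistent with the split above.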

# Intersection of planes.

Starting with two planes specified parametrically, each in terms of two direction vectors and a point on the plane:

\begin{aligned}\mathbf{x} &= \mathbf{p} + \alpha \mathbf{u} + \beta \mathbf{v} \\ \mathbf{y} &= \mathbf{q} + a \mathbf{w} + b \mathbf{z} \\ \end{aligned} \quad\quad\quad(9)

If these intersect then all points on the line must satisfy $\mathbf{x} = \mathbf{y}$, so the solution requires:

\begin{aligned}\mathbf{p} + \alpha \mathbf{u} + \beta \mathbf{v} = \mathbf{q} + a \mathbf{w} + b \mathbf{z}\end{aligned}

\begin{aligned}\implies(\mathbf{p} + \alpha \mathbf{u} + \beta \mathbf{v}) \wedge \mathbf{w} \wedge \mathbf{z} = (\mathbf{q} + a \mathbf{w} + b \mathbf{z}) \wedge \mathbf{w} \wedge \mathbf{z} = \mathbf{q} \wedge \mathbf{w} \wedge \mathbf{z}\end{aligned}

Rearranging for $\beta$, and writing $\mathbf{B} = \mathbf{w} \wedge \mathbf{z}$:

\begin{aligned}\beta = \frac{\mathbf{q} \wedge \mathbf{B} - (\mathbf{p} + \alpha \mathbf{u}) \wedge \mathbf{B}}{\mathbf{v} \wedge \mathbf{B}}\end{aligned}

Note that when the solution exists the left vs right order of the division by $\mathbf{v} \wedge \mathbf{B}$ should not matter since the numerator will be proportional to this bivector (or else the $\beta$ would not be a scalar).

Substitution of $\beta$ back into $\mathbf{x} = \mathbf{p} + \alpha \mathbf{u} + \beta \mathbf{v}$ (all points in the first plane) gives you a parametric equation for a line:

\begin{aligned}\mathbf{x} = \mathbf{p} + \frac{(\mathbf{q}-\mathbf{p})\wedge \mathbf{B}}{\mathbf{v} \wedge \mathbf{B}}\mathbf{v} + \alpha\frac{1}{\mathbf{v} \wedge \mathbf{B}}((\mathbf{v} \wedge \mathbf{B}) \mathbf{u} - (\mathbf{u} \wedge \mathbf{B})\mathbf{v})\end{aligned}

Where a point on the line is:

\begin{aligned}\mathbf{p} + \frac{(\mathbf{q}-\mathbf{p})\wedge \mathbf{B}}{\mathbf{v} \wedge \mathbf{B}}\mathbf{v} \end{aligned}

And a direction vector for the line is:

\begin{aligned}\frac{1}{\mathbf{v} \wedge \mathbf{B}}((\mathbf{v} \wedge \mathbf{B}) \mathbf{u} - (\mathbf{u} \wedge \mathbf{B})\mathbf{v})\end{aligned}

\begin{aligned}\propto(\mathbf{v} \wedge \mathbf{B})^2 \mathbf{u} - (\mathbf{v} \wedge \mathbf{B})(\mathbf{u} \wedge \mathbf{B})\mathbf{v}\end{aligned}

Now, this result is only valid if $\mathbf{v} \wedge \mathbf{B} \ne 0$ (i.e. the line of intersection is not directed along $\mathbf{v}$), but if that is the case this second form will be zero. Thus we can add the two results (or any non-zero linear combination of them), allowing for either of $\mathbf{u}$ or $\mathbf{v}$ to be directed along the line of intersection:

\begin{aligned}a\left( (\mathbf{v} \wedge \mathbf{B})^2 \mathbf{u}- (\mathbf{v} \wedge \mathbf{B})(\mathbf{u} \wedge \mathbf{B})\mathbf{v} \right)+ b\left((\mathbf{u} \wedge \mathbf{B})^2 \mathbf{v} - (\mathbf{u} \wedge \mathbf{B})(\mathbf{v} \wedge \mathbf{B})\mathbf{u}\right)\end{aligned} \quad\quad\quad(12)

Alternately, one could formulate this in terms of $\mathbf{A} = \mathbf{u} \wedge \mathbf{v}$, $\mathbf{w}$, and $\mathbf{z}$. Is there a more symmetrical form for this direction vector?
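Before moving on, the point-on-line result above can be sanity checked numerically. In $\mathbb{R}^{3}$ the ratio of wedge products reduces to a ratio of scalar triple products: with $\mathbf{n}_2 = \mathbf{w} \times \mathbf{z}$ the normal dual to $\mathbf{B}$, the coefficient of $\mathbf{v}$ becomes $((\mathbf{q}-\mathbf{p}) \cdot \mathbf{n}_2)/(\mathbf{v} \cdot \mathbf{n}_2)$. A minimal sketch of that check, with a pair of arbitrarily picked (hypothetical) planes:

```python
import numpy as np

# Hypothetical planes: x = p + alpha*u + beta*v, y = q + a*w + b*z
p, u, v = np.array([1., 0., 0.]), np.array([1., 2., 0.]), np.array([0., 1., 1.])
q, w, z = np.array([0., 0., 1.]), np.array([1., 0., 1.]), np.array([0., 1., 0.])

n2 = np.cross(w, z)                       # normal of second plane (dual of B = w ^ z)
beta = np.dot(q - p, n2) / np.dot(v, n2)  # R^3 reduction of ((q - p) ^ B)/(v ^ B)
x0 = p + beta * v                         # candidate point on the line of intersection

# x0 is in the first plane by construction; verify it is in the second plane too
assert abs(np.dot(x0 - q, n2)) < 1e-12
```

The same reduction applies to the direction vector term, since every wedge with $\mathbf{B}$ becomes a dot with $\mathbf{n}_2$ times the pseudoscalar.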

## Vector along line of intersection in $\mathbb{R}^{3}$

For $\mathbb{R}^{3}$ one can solve the intersection problem using the normals to the planes. For simplicity put the origin on the line of intersection (and all planes through a common point in $\mathbb{R}^{3}$ have at least a line of intersection). In this case, for bivectors $\mathbf{A}$ and $\mathbf{B}$, normals to those planes are $i\mathbf{A}$, and $i\mathbf{B}$ respectively. The plane through both of those normals is:

\begin{aligned}(i\mathbf{A}) \wedge (i\mathbf{B})= \frac{(i\mathbf{A})(i\mathbf{B}) - (i\mathbf{B})(i\mathbf{A})}{2} = \frac{\mathbf{B}\mathbf{A} - \mathbf{A}\mathbf{B}}{2} = {\left\langle{{\mathbf{B}\mathbf{A}}}\right\rangle}_{2}\end{aligned}

The normal to this plane

\begin{aligned}i{\left\langle{{\mathbf{B}\mathbf{A}}}\right\rangle}_{2}\end{aligned} \quad\quad\quad(13)

is directed along the line of intersection. This result is more appealing than the general $\mathbb{R}^{N}$ result of equation 12, not just because it is simpler, but also because it is a function of only the bivectors for the planes, without a requirement to find or calculate two specific independent direction vectors in one of the planes.

## Applying this result to $\mathbb{R}^{N}$

If you reject the component of $\mathbf{A}$ from $\mathbf{B}$ for two intersecting bivectors:

\begin{aligned}\text{Rej}_{\mathbf{A}}(\mathbf{B}) = \frac{1}{\mathbf{A}}{\left\langle{{\mathbf{A}\mathbf{B}}}\right\rangle}_{2}\end{aligned}

the line of intersection remains the same … that operation rotates $\mathbf{B}$ so that the two are mutually perpendicular. This essentially reduces the problem to that of the three dimensional case, so the solution has to be of the same form… you just need to calculate a “pseudoscalar” (what you are calling the join), for the subspace spanned by the two bivectors.

That can be computed by taking any direction vector that is on one plane, but isn’t in the second. For example, pick a vector $\mathbf{u}$ in the plane $\mathbf{A}$ that is not on the intersection of $\mathbf{A}$ and $\mathbf{B}$. In mathese that is $\mathbf{u} = \frac{1}{\mathbf{A}}(\mathbf{A}\cdot \mathbf{u})$ (or $\mathbf{u} \wedge \mathbf{A} = 0$), where $\mathbf{u} \wedge \mathbf{B} \ne 0$. Thus a pseudoscalar for this subspace is:

\begin{aligned}\mathbf{i} = \frac{\mathbf{u} \wedge \mathbf{B}}{{\left\lvert{\mathbf{u} \wedge \mathbf{B}}\right\rvert}}\end{aligned}

To calculate the direction vector along the intersection we don’t care about the scaling above. Also note that provided $\mathbf{u}$ has a component in the plane $\mathbf{A}$, $\mathbf{u} \cdot \mathbf{A}$ is also in the plane (it’s rotated $\pi/2$ from $\frac{1}{\mathbf{A}}(\mathbf{A} \cdot \mathbf{u})$).

Thus, provided that $\mathbf{u} \cdot \mathbf{A}$ isn’t on the intersection, a scaled “pseudoscalar” for the subspace can be calculated from any vector $\mathbf{u}$ with a component in the plane $\mathbf{A}$:

\begin{aligned}\mathbf{i} \propto (\mathbf{u} \cdot \mathbf{A}) \wedge \mathbf{B}\end{aligned}

Thus a vector along the intersection is:

\begin{aligned}\mathbf{d} = ((\mathbf{u} \cdot \mathbf{A}) \wedge \mathbf{B}) {\left\langle{{\mathbf{A}\mathbf{B}}}\right\rangle}_{2}\end{aligned} \quad\quad\quad(14)

Interchange of $\mathbf{A}$ and $\mathbf{B}$ in either the trivector or bivector terms above would also work.

Without showing the steps one can write the complete parametric solution of the line through the planes of equations 9 in terms of this direction vector:

\begin{aligned}\mathbf{x} = \mathbf{p} + \left(\frac{(\mathbf{q} - \mathbf{p})\wedge \mathbf{B}}{(\mathbf{d} \cdot \mathbf{A}) \wedge \mathbf{B}}\right) (\mathbf{d} \cdot \mathbf{A}) + \alpha \mathbf{d}\end{aligned} \quad\quad\quad(15)

Since $(\mathbf{d} \cdot \mathbf{A}) \ne 0$ and $(\mathbf{d} \cdot \mathbf{A}) \wedge \mathbf{B} \ne 0$ (unless $\mathbf{A}$ and $\mathbf{B}$ are coplanar), observe that this is a natural generator of the pseudoscalar for the subspace, and as such shows up in the expression above.

Also observe the non-coincidental similarity of the $\mathbf{q}-\mathbf{p}$ term to Cramer’s rule (a ratio of determinants).

# Components of a grade two multivector

The procedure to calculate projections and rejections of planes onto planes is similar to a vector projection onto a space.

To arrive at that result we can consider the product of a grade two multivector $\mathbf{A}$ with a bivector $\mathbf{B}$ and its inverse (the restriction that $\mathbf{B}$ be a bivector, a grade two multivector that can be written as a wedge product of two vectors, is required for general invertibility).

\begin{aligned}\mathbf{A}\frac{1}{\mathbf{B}}\mathbf{B} &= \left(\mathbf{A} \cdot \frac{1}{\mathbf{B}} + {\left\langle{{ \mathbf{A} \frac{1}{\mathbf{B}} }}\right\rangle}_{2} + \mathbf{A} \wedge \frac{1}{\mathbf{B}}\right) \mathbf{B} \\ &= \mathbf{A} \cdot \frac{1}{\mathbf{B}} \mathbf{B} \\ &+{\left\langle{{ \mathbf{A} \frac{1}{\mathbf{B}} }}\right\rangle}_{2} \cdot \mathbf{B} +{\left\langle{{ {\left\langle{{ \mathbf{A} \frac{1}{\mathbf{B}} }}\right\rangle}_{2} \mathbf{B} }}\right\rangle}_{2}+{\left\langle{{ \mathbf{A} \frac{1}{\mathbf{B}} }}\right\rangle}_{2} \wedge \mathbf{B} \\ &+\left(\mathbf{A} \wedge \frac{1}{\mathbf{B}}\right) \cdot \mathbf{B} +{\left\langle{{\mathbf{A} \wedge \frac{1}{\mathbf{B}} \mathbf{B}}}\right\rangle}_{4}+\mathbf{A} \wedge \frac{1}{\mathbf{B}} \wedge \mathbf{B} \\ \end{aligned}

Since $\frac{1}{\mathbf{B}} = -\frac{\mathbf{B}}{{{\left\lvert{\mathbf{B}}\right\rvert}}^2}$, this implies that the 6-grade term $\mathbf{A} \wedge \frac{1}{\mathbf{B}} \wedge \mathbf{B}$ is zero. Since the LHS has grade 2, this implies that the 0-grade and 4-grade terms are zero (also independently implies that the 6-grade term is zero). This leaves:

\begin{aligned}\mathbf{A}= \mathbf{A} \cdot \frac{1}{\mathbf{B}} \mathbf{B} \\ +{\left\langle{{{\left\langle{{\mathbf{A}\frac{1}{\mathbf{B}}}}\right\rangle}_{2} \mathbf{B}}}\right\rangle}_{2}+\left(\mathbf{A} \wedge \frac{1}{\mathbf{B}}\right) \cdot \mathbf{B} \end{aligned} \quad\quad\quad(16)

This could be written somewhat more symmetrically as

\begin{aligned}\mathbf{A}&=\sum_{i=0,2,4}{\left\langle{{{\left\langle{{\mathbf{A} \frac{1}{\mathbf{B}}}}\right\rangle}_{{i}} \mathbf{B}}}\right\rangle}_{2} \\ &= {\left\langle{{ \left\langle{{\mathbf{A} \frac{1}{\mathbf{B}}}}\right\rangle \mathbf{B} +{\left\langle{{\mathbf{A} \frac{1}{\mathbf{B}}}}\right\rangle}_{2} \mathbf{B} +{\left\langle{{\mathbf{A} \frac{1}{\mathbf{B}}}}\right\rangle}_{4} \mathbf{B} }}\right\rangle}_{2} \\ \end{aligned}

This is also a more direct way to derive the result in retrospect.

Looking at equation 16 we have three terms. The first is

\begin{aligned}\mathbf{A} \cdot \frac{1}{\mathbf{B}} \mathbf{B}\end{aligned} \quad\quad\quad(17)

This is the component of $\mathbf{A}$ that lies in the plane $\mathbf{B}$ (the projection of $\mathbf{A}$ onto $\mathbf{B}$).

The next is

\begin{aligned}{\left\langle{{{\left\langle{{\mathbf{A}\frac{1}{\mathbf{B}}}}\right\rangle}_{2} \mathbf{B}}}\right\rangle}_{2}\end{aligned} \quad\quad\quad(18)

If $\mathbf{B}$ and $\mathbf{A}$ have any intersecting components, these are the components of $\mathbf{A}$ from the intersection that are perpendicular to $\mathbf{B}$ with respect to the bivector dot product. That is, this is the rejective term.

And finally,

\begin{aligned}\left(\mathbf{A} \wedge \frac{1}{\mathbf{B}}\right) \cdot \mathbf{B}\end{aligned} \quad\quad\quad(19)

This is the remainder: the non-projective and non-coplanar terms. More than three dimensions are required to generate such a term. For example:

\begin{aligned}\mathbf{A} &= \mathbf{e}_{12} + \mathbf{e}_{23} + \mathbf{e}_{43} \\ \mathbf{B} &= \mathbf{e}_{34} \\ \end{aligned}

Product terms for these are:

\begin{aligned}\mathbf{A} \cdot \mathbf{B} &= 1 \\ {\left\langle{{\mathbf{A} \mathbf{B}}}\right\rangle}_{2} &= \mathbf{e}_{24} \\ \mathbf{A} \wedge \mathbf{B} &= \mathbf{e}_{1234} \\ \end{aligned}

The decomposition is thus:

\begin{aligned}\mathbf{A} = \left(\mathbf{A} \cdot \mathbf{B} + {\left\langle{{\mathbf{A} \mathbf{B}}}\right\rangle}_{2} + \mathbf{A} \wedge \mathbf{B}\right) \frac{1}{\mathbf{B}} = (1 + \mathbf{e}_{24} + \mathbf{e}_{1234}) \mathbf{e}_{43}\end{aligned}
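This decomposition can be verified mechanically. Below is a minimal sketch of a Euclidean geometric product, representing basis blades as bitmasks (one bit per basis vector, so $\mathbf{e}_{12} = 0b0011$ in $\mathbb{R}^4$); the helper names `sign`, `gp`, and `grade` are mine, not from any GA library:

```python
from itertools import product

def sign(a, b):
    """Reordering sign for the product of Euclidean basis blades a, b (bitmasks)."""
    a >>= 1
    s = 0
    while a:
        s += bin(a & b).count("1")  # swaps needed to merge the two blades
        a >>= 1
    return -1 if s % 2 else 1

def gp(x, y):
    """Geometric product of multivectors stored as {bitmask: coefficient} dicts."""
    out = {}
    for (ba, ca), (bb, cb) in product(x.items(), y.items()):
        blade = ba ^ bb             # repeated basis vectors square to +1
        out[blade] = out.get(blade, 0) + sign(ba, bb) * ca * cb
    return {b: c for b, c in out.items() if c != 0}

def grade(x, k):
    """Grade-k part of a multivector."""
    return {b: c for b, c in x.items() if bin(b).count("1") == k}

# e1..e4 are bits 0..3: e12 = 0b0011, e23 = 0b0110, e34 = 0b1100, e43 = -e34
A = {0b0011: 1, 0b0110: 1, 0b1100: -1}   # e12 + e23 + e43
B = {0b1100: 1}                          # e34

AB = gp(A, B)
assert grade(AB, 0) == {0b0000: 1}       # A . B = 1
assert grade(AB, 2) == {0b1010: 1}       # <AB>_2 = e24
assert grade(AB, 4) == {0b1111: 1}       # A ^ B = e1234

# B^2 = -1, so 1/B = -B = e43; recover A = (AB)(1/B)
assert gp(AB, {0b1100: -1}) == A
```

The bitmask encoding makes the product table trivial: the result blade is the XOR of the operands, and only the reordering sign needs counting.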

## Closer look at the grade two term

The grade two term of equation 18 can be expanded using its antisymmetric bivector product representation

\begin{aligned}{\left\langle{{\mathbf{A}\frac{1}{\mathbf{B}}}}\right\rangle}_{2} \mathbf{B}&= \frac{1}{{2}}\left(\mathbf{A}\frac{1}{\mathbf{B}} - \frac{1}{\mathbf{B}}\mathbf{A}\right) \mathbf{B} \\ &= \frac{1}{{2}}\left(\mathbf{A} - \frac{1}{\mathbf{B}}\mathbf{A} \mathbf{B}\right) \\ &= \frac{1}{{2}}\left(\mathbf{A} - \frac{1}{{\hat{\mathbf{B}}}}\mathbf{A} \hat{\mathbf{B}}\right) \\ \end{aligned}

Observe here one can restrict the examination to the case where $\mathbf{B}$ is a unit bivector without loss of generality.

\begin{aligned}{\left\langle{{\mathbf{A}\frac{1}{\mathbf{i}}}}\right\rangle}_{2} \mathbf{i}&= \frac{1}{{2}}\left(\mathbf{A} + \mathbf{i}\mathbf{A}\mathbf{i}\right) \\ &= \frac{1}{{2}}\left(\mathbf{A} - \mathbf{i}^\dagger\mathbf{A}\mathbf{i}\right) \\ \end{aligned}

The second term is a rotation in the plane $\mathbf{i}$, by 180 degrees:

\begin{aligned}\mathbf{i}^\dagger\mathbf{A}\mathbf{i} = e^{-\mathbf{i} \pi/2}\mathbf{A} e^{\mathbf{i} \pi/2}\end{aligned}

So, any components of $\mathbf{A}$ that are completely in the plane cancel out (that is, the $\mathbf{A} \cdot \frac{1}{\mathbf{i}}\mathbf{i}$ component).

Also, if ${\left\langle{{\mathbf{A} \mathbf{i}}}\right\rangle}_{4} \ne 0$ then those components of $\mathbf{A} \mathbf{i}$ commute so

\begin{aligned}{\left\langle{{\mathbf{A} - \mathbf{i}^\dagger\mathbf{A}\mathbf{i}}}\right\rangle}_{4}&= {\left\langle{\mathbf{A}}\right\rangle}_{4} - {\left\langle{{\mathbf{i}^\dagger\mathbf{A}\mathbf{i}}}\right\rangle}_{4} \\ &= {\left\langle{\mathbf{A}}\right\rangle}_{4} - {\left\langle{{\mathbf{i}^\dagger\mathbf{i}\mathbf{A}}}\right\rangle}_{4} \\ &= {\left\langle{\mathbf{A}}\right\rangle}_{4} - {\left\langle{\mathbf{A}}\right\rangle}_{4} \\ &= 0 \\ \end{aligned}

This implies that we have only grade two terms, and the final grade selection in equation 18 can be dropped:

\begin{aligned} {\left\langle{{{\left\langle{{\mathbf{A}\frac{1}{\mathbf{B}}}}\right\rangle}_{2} \mathbf{B}}}\right\rangle}_{2} = {\left\langle{{\mathbf{A}\frac{1}{\mathbf{B}}}}\right\rangle}_{2} \mathbf{B}\end{aligned} \quad\quad\quad(20)

It’s also possible to write this in a few alternate variations which are useful to list explicitly so that one can recognize them in other contexts:

\begin{aligned}{\left\langle{{\mathbf{A}\frac{1}{\mathbf{B}}}}\right\rangle}_{2} \mathbf{B}&= \frac{1}{{2}}\left(\mathbf{A} - \frac{1}{\mathbf{B}}\mathbf{A}\mathbf{B}\right) \\ &= \frac{1}{{2}}\left(\mathbf{A} + \hat{\mathbf{B}}\mathbf{A}\hat{\mathbf{B}}\right) \\ &= \frac{1}{{2}}\left( \hat{\mathbf{B}}\mathbf{A} -\mathbf{A}\hat{\mathbf{B}} \right)\hat{\mathbf{B}} \\ &= {\left\langle{{\hat{\mathbf{B}}\mathbf{A}}}\right\rangle}_{2}\hat{\mathbf{B}} \\ &= \hat{\mathbf{B}}{\left\langle{{\mathbf{A}\hat{\mathbf{B}}}}\right\rangle}_{2} \\ \end{aligned}

## Projection and Rejection

Equation 20 can be substituted back into equation 16 yielding:

\begin{aligned}\mathbf{A} =\mathbf{A} \cdot \frac{1}{\mathbf{B}} \mathbf{B} \\ +{\left\langle{{\mathbf{A}\frac{1}{\mathbf{B}}}}\right\rangle}_{2} \mathbf{B}+\left(\mathbf{A} \wedge \frac{1}{\mathbf{B}}\right) \cdot \mathbf{B} \end{aligned} \quad\quad\quad(21)

Now, for the special case where $\mathbf{A} \wedge \mathbf{B} = 0$ (all bivector components of the grade two multivector $\mathbf{A}$ have a common vector with bivector $\mathbf{B}$) we can write

\begin{aligned}\mathbf{A} &= \mathbf{A} \cdot \frac{1}{\mathbf{B}} \mathbf{B} +{\left\langle{{\mathbf{A}\frac{1}{\mathbf{B}}}}\right\rangle}_{2} \mathbf{B} \\ &= \mathbf{B} \frac{1}{\mathbf{B}} \cdot {\mathbf{A}} + \mathbf{B} {\left\langle{{\frac{1}{\mathbf{B}}\mathbf{A}}}\right\rangle}_{2} \\ \end{aligned}

This is

\begin{aligned}\mathbf{A} = \text{Proj}_{\mathbf{B}}(\mathbf{A}) + \text{Rej}_{\mathbf{B}}(\mathbf{A}) \end{aligned} \quad\quad\quad(22)

It’s worth verifying that these two terms are orthogonal (with respect to the bivector dot product)

\begin{aligned}\text{Proj}_{\mathbf{B}}(\mathbf{A}) \cdot \text{Rej}_{\mathbf{B}}(\mathbf{A})&= \left\langle{{ \text{Proj}_{\mathbf{B}}(\mathbf{A}) \text{Rej}_{\mathbf{B}}(\mathbf{A}) }}\right\rangle \\ &= \left\langle{{ \mathbf{A} \cdot \frac{1}{\mathbf{B}} \mathbf{B} \mathbf{B} {\left\langle{{\frac{1}{\mathbf{B}}\mathbf{A}}}\right\rangle}_{2} }}\right\rangle \\ &= \frac{1}{{4\mathbf{B}^2}}\left\langle{{ (\mathbf{A}\mathbf{B} + \mathbf{B}\mathbf{A})(\mathbf{B}\mathbf{A} - \mathbf{A}\mathbf{B}) }}\right\rangle \\ &= \frac{1}{{4\mathbf{B}^2}}\left\langle{{ \mathbf{A}\mathbf{B}\mathbf{B}\mathbf{A} -\mathbf{A}\mathbf{B}\mathbf{A}\mathbf{B} +\mathbf{B}\mathbf{A}\mathbf{B}\mathbf{A} -\mathbf{B}\mathbf{A}\mathbf{A}\mathbf{B} }}\right\rangle \\ &= \frac{1}{{4\mathbf{B}^2}}\left\langle{{ -\mathbf{A}\mathbf{B}\mathbf{A}\mathbf{B} +\mathbf{B}\mathbf{A}\mathbf{B}\mathbf{A} }}\right\rangle \\ \end{aligned}

Since we have introduced the restriction $\mathbf{A} \wedge \mathbf{B} = 0$, we can use the dot product to reorder product terms:

\begin{aligned}\mathbf{A}\mathbf{B} = -\mathbf{B}\mathbf{A} + 2 \mathbf{A} \cdot \mathbf{B}\end{aligned}

This can be used to reduce the grade zero term above:

\begin{aligned}\left\langle{{ \mathbf{B}\mathbf{A}\mathbf{B}\mathbf{A} -\mathbf{A}\mathbf{B}\mathbf{A}\mathbf{B} }}\right\rangle&= \left\langle{{ \mathbf{B}\mathbf{A}(-\mathbf{A}\mathbf{B} + 2 \mathbf{A} \cdot \mathbf{B}) -(-\mathbf{B}\mathbf{A} + 2 \mathbf{A} \cdot \mathbf{B})\mathbf{A}\mathbf{B} }}\right\rangle \\ &= + 2 (\mathbf{A} \cdot \mathbf{B})\left\langle{{\mathbf{B}\mathbf{A} - \mathbf{A}\mathbf{B} }}\right\rangle \\ &= + 4 (\mathbf{A} \cdot \mathbf{B})\left\langle{{{\left\langle{{\mathbf{B}\mathbf{A}}}\right\rangle}_{2}}}\right\rangle \\ &= 0 \\ \end{aligned}

This proves orthogonality as expected.

## Grade two term as a generator of rotations.

\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.4\textheight]{planerejection}
\caption{Bivector rejection. Perpendicular component of plane.}
\label{fig:planerejection}
\end{figure}

Figure \ref{fig:planerejection} illustrates how the grade 2 component of the bivector product acts as a rotation in the rejection operation.

Provided that $\mathbf{A}$ and $\mathbf{B}$ are not coplanar, ${\left\langle{{\mathbf{A}\mathbf{B}}}\right\rangle}_{2}$ is a plane mutually perpendicular to both.

Given two mutually perpendicular unit bivectors ${\mathbf{A}}$ and ${\mathbf{B}}$, we can in fact write:

\begin{aligned}{\mathbf{B}} = {\mathbf{A}}{\left\langle{{\mathbf{B}{\mathbf{A}}}}\right\rangle}_{2}\end{aligned}

\begin{aligned}{\mathbf{B}} = {\left\langle{{\mathbf{A}{\mathbf{B}}}}\right\rangle}_{2}{\mathbf{A}}\end{aligned}

Compare this to a unit bivector for two mutually perpendicular vectors:

\begin{aligned}\mathbf{b} = \mathbf{a} (\mathbf{a} \wedge \mathbf{b})\end{aligned}

\begin{aligned}\mathbf{b} = (\mathbf{b} \wedge \mathbf{a}) \mathbf{a}\end{aligned}

In both cases, the unit bivector functions as an imaginary number, applying a rotation of $\pi/2$ rotating one of the perpendicular entities onto the other.

As with vectors one can split the rotation of the unit bivector into half angle left and right rotations. For example, for the same mutually perpendicular pair of bivectors one can write

\begin{aligned}\mathbf{B} &= \mathbf{A}{\left\langle{{\mathbf{B} \mathbf{A}}}\right\rangle}_{2} \\ &= \mathbf{A} e^{{\left\langle{{\mathbf{B} \mathbf{A}}}\right\rangle}_{2}\pi/2} \\ &= e^{-{\left\langle{{\mathbf{B} \mathbf{A}}}\right\rangle}_{2}\pi/4} \mathbf{A} e^{{\left\langle{{\mathbf{B} \mathbf{A}}}\right\rangle}_{2}\pi/4} \\ &= \left(\frac{1}{{\sqrt{2}}}(1 - \mathbf{B} \mathbf{A})\right) \mathbf{A} \left(\frac{1}{{\sqrt{2}}}(1 + \mathbf{B} \mathbf{A}) \right) \\ \end{aligned}

Direct multiplication can be used to verify that this does in fact produce the desired result.

In general, writing

\begin{aligned}\mathbf{i} = \frac{{\left\langle{{\mathbf{B} \mathbf{A}}}\right\rangle}_{2}}{{\left\lvert{{\left\langle{{\mathbf{B} \mathbf{A}}}\right\rangle}_{2}}\right\rvert}}\end{aligned}

the rotation of plane $\mathbf{B}$ towards $\mathbf{A}$ by angle $\theta$ can be expressed with either a single sided full angle

\begin{aligned}\text{Rot}_{\theta: \mathbf{A} \rightarrow \mathbf{B}}(\mathbf{A}) &= \mathbf{A} e^{\mathbf{i} \theta} \\ &= e^{-\mathbf{i} \theta} \mathbf{A} \\ \end{aligned}

or with the double sided half angle rotor formula:

\begin{aligned}\text{Rot}_{\theta: \mathbf{A} \rightarrow \mathbf{B}}(\mathbf{A}) = e^{-\mathbf{i} \theta/2} \mathbf{A} e^{\mathbf{i} \theta/2} = \mathbf{R}^\dagger \mathbf{A} \mathbf{R}\end{aligned} \quad\quad\quad(23)

Where:

\begin{aligned}\mathbf{R} &= e^{\mathbf{i}\theta/2} \\ &= \cos(\theta/2) + \frac{{\left\langle{{\mathbf{B} \mathbf{A}}}\right\rangle}_{2}}{{\left\lvert{{\left\langle{{\mathbf{B} \mathbf{A}}}\right\rangle}_{2}}\right\rvert}}\sin(\theta/2) \\ \end{aligned}

As with half angle rotors applied to vectors, there are two possible orientations to rotate. Here the orientation of the rotation is such that the angle is measured along the minimal arc between the two, where the angle between the two is in the range $(0,\pi)$ as opposed to the $(\pi,2\pi)$ rotational direction.

## Angle between two intersecting planes.

It is worth pointing out, for comparison with the vector result, that one can use the bivector dot product to calculate the angle between two intersecting planes. This angle of separation $\theta$ between the two can be expressed using the exponential:

\begin{aligned}\hat{\mathbf{B}} = \hat{\mathbf{A}} e^{ \frac{{\left\langle{{\mathbf{B} \mathbf{A}}}\right\rangle}_{2}}{{\left\lvert{{\left\langle{{\mathbf{B} \mathbf{A}}}\right\rangle}_{2}}\right\rvert}} \theta}\end{aligned}

\begin{aligned}\implies-\hat{\mathbf{A}} \hat{\mathbf{B}} = e^{ \frac{{\left\langle{{\mathbf{B} \mathbf{A}}}\right\rangle}_{2}}{{\left\lvert{{\left\langle{{\mathbf{B} \mathbf{A}}}\right\rangle}_{2}}\right\rvert}} \theta}\end{aligned}

Taking the grade zero terms of both sides we have:

\begin{aligned}-\left\langle{{\hat{\mathbf{A}} \hat{\mathbf{B}}}}\right\rangle = \left\langle{{ e^{ \frac{{\left\langle{{\mathbf{B} \mathbf{A}}}\right\rangle}_{2}}{{\left\lvert{{\left\langle{{\mathbf{B} \mathbf{A}}}\right\rangle}_{2}}\right\rvert}} \theta} }}\right\rangle\end{aligned}

\begin{aligned}\implies\cos(\theta) = - \frac{\mathbf{A} \cdot \mathbf{B}}{{\left\lvert{\mathbf{A}}\right\rvert}{\left\lvert{\mathbf{B}}\right\rvert}}\end{aligned}

The sine can be obtained by selecting the grade two terms

\begin{aligned}-{\left\langle{{\hat{\mathbf{A}} \hat{\mathbf{B}}}}\right\rangle}_{2} = \frac{{\left\langle{{\mathbf{B} \mathbf{A}}}\right\rangle}_{2}}{{\left\lvert{{\left\langle{{\mathbf{B} \mathbf{A}}}\right\rangle}_{2}}\right\rvert}} \sin(\theta)\end{aligned}

\begin{aligned}\implies\sin(\theta) = \frac{{\left\lvert{{\left\langle{{\mathbf{B} \mathbf{A}}}\right\rangle}_{2}}\right\rvert}}{ {\left\lvert{\mathbf{A}}\right\rvert}{\left\lvert{\mathbf{B}}\right\rvert} }\end{aligned}

Note that the strictly positive sine result here is consistent with the fact that the angle is being measured such that it is in the
$(0,\pi)$ range.
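In $\mathbb{R}^{3}$, where the unit bivectors are duals of unit normals $\hat{\mathbf{n}}_1, \hat{\mathbf{n}}_2$, we have $\hat{\mathbf{A}} \cdot \hat{\mathbf{B}} = -\hat{\mathbf{n}}_1 \cdot \hat{\mathbf{n}}_2$, so these grade selections reduce to $\cos\theta = \hat{\mathbf{n}}_1 \cdot \hat{\mathbf{n}}_2$ and $\sin\theta = {\left\lvert{\hat{\mathbf{n}}_1 \times \hat{\mathbf{n}}_2}\right\rvert}$. A small numeric check with hypothetical planes:

```python
import numpy as np

# Hypothetical unit normals to two planes through the origin
n1 = np.array([0., 0., 1.])                      # x-y plane
n2 = np.array([0., 1., 1.]) / np.sqrt(2.)        # plane tilted 45 degrees from it

cos_theta = np.dot(n1, n2)                       # = -A.B for the dual unit bivectors
sin_theta = np.linalg.norm(np.cross(n1, n2))     # = |<BA>_2| for the dual bivectors

theta = np.arctan2(sin_theta, cos_theta)
assert np.isclose(cos_theta**2 + sin_theta**2, 1.0)
assert np.isclose(theta, np.pi / 4)              # 45 degree dihedral angle
```

Since the sine here is a magnitude, the recovered angle lands in $(0,\pi)$, consistent with the convention above.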

## Rotation of an arbitrarily oriented plane.

As stated in a few of the GA books, the rotor equation is a rotation representation that works for blades of all grades. Let’s verify this for the bivector case. Given a plane through the origin spanned by two direction vectors, and rotated about the origin in a plane specified by a unit magnitude rotor $\mathbf{R}$, the rotated plane will be specified by the wedge of the rotations applied to the two direction vectors. Let

\begin{aligned}\mathbf{A} = \mathbf{u} \wedge \mathbf{v}\end{aligned}

Then,

\begin{aligned}R(\mathbf{A}) &= R(\mathbf{u}) \wedge R(\mathbf{v}) \\ &= (\mathbf{R}^\dagger \mathbf{u} \mathbf{R}) \wedge (\mathbf{R}^\dagger \mathbf{v} \mathbf{R}) \\ &= \frac{1}{{2}}( \mathbf{R}^\dagger \mathbf{u} \mathbf{R} \mathbf{R}^\dagger \mathbf{v} \mathbf{R} - \mathbf{R}^\dagger \mathbf{v} \mathbf{R} \mathbf{R}^\dagger \mathbf{u} \mathbf{R}) \\ &= \frac{1}{{2}}( \mathbf{R}^\dagger \mathbf{u} \mathbf{v} \mathbf{R} - \mathbf{R}^\dagger \mathbf{v} \mathbf{u} \mathbf{R}) \\ &= \mathbf{R}^\dagger \frac{\mathbf{u} \mathbf{v} - \mathbf{v} \mathbf{u}}{2} \mathbf{R} \\ &= \mathbf{R}^\dagger \mathbf{u} \wedge \mathbf{v} \mathbf{R} \\ &= \mathbf{R}^\dagger \mathbf{A} \mathbf{R} \\ \end{aligned}

Observe that with this half angle double sided rotation equation, any component of $\mathbf{A}$ in the plane of rotation, or any component that does not intersect the plane of rotation, will be unchanged by the rotor since it will commute with it. In those cases the opposing sign half angle rotations will cancel out. Only the components of the plane that are perpendicular to the rotational plane will be changed by this rotation operation.

# A couple of reduction formula equivalents from $\mathbb{R}^{3}$ vector geometry.

The reduction of the $\mathbb{R}^{3}$ dot of cross products to dot products can be naturally derived using GA arguments. Writing $i$ as the $\mathbb{R}^{3}$ pseudoscalar we have:

\begin{aligned}( \mathbf{a} \times \mathbf{b} ) \cdot ( \mathbf{c} \times \mathbf{d} )&= \frac{\mathbf{a} \wedge \mathbf{b}}{i} \cdot \frac{\mathbf{c} \wedge \mathbf{d}}{i} \\ &= \frac{1}{{2}}\left( \frac{\mathbf{a} \wedge \mathbf{b}}{i} \frac{\mathbf{c} \wedge \mathbf{d}}{i} + \frac{\mathbf{c} \wedge \mathbf{d}}{i} \frac{\mathbf{a} \wedge \mathbf{b}}{i} \right) \\ &= -\frac{1}{{2}}\left( (\mathbf{a} \wedge \mathbf{b}) (\mathbf{c} \wedge \mathbf{d}) + (\mathbf{c} \wedge \mathbf{d}) (\mathbf{a} \wedge \mathbf{b}) \right) \\ &= - (\mathbf{a} \wedge \mathbf{b}) \cdot (\mathbf{c} \wedge \mathbf{d}) - (\mathbf{a} \wedge \mathbf{b}) \wedge (\mathbf{c} \wedge \mathbf{d})\end{aligned}

In $\mathbb{R}^{3}$ this last term must be zero, thus one can write

\begin{aligned}( \mathbf{a} \times \mathbf{b} ) \cdot ( \mathbf{c} \times \mathbf{d} ) = -(\mathbf{a} \wedge \mathbf{b}) \cdot (\mathbf{c} \wedge \mathbf{d})\end{aligned} \quad\quad\quad(24)

This is now in a form where it can be reduced to products of vector dot products.

\begin{aligned}(\mathbf{a} \wedge \mathbf{b}) \cdot (\mathbf{c} \wedge \mathbf{d})&= \frac{1}{{2}}\left\langle{{ (\mathbf{a} \wedge \mathbf{b}) (\mathbf{c} \wedge \mathbf{d}) + (\mathbf{c} \wedge \mathbf{d}) (\mathbf{a} \wedge \mathbf{b}) }}\right\rangle \\ &= \frac{1}{{2}}\left\langle{{ (\mathbf{a} \wedge \mathbf{b}) (\mathbf{c} \wedge \mathbf{d}) + (\mathbf{d} \wedge \mathbf{c}) (\mathbf{b} \wedge \mathbf{a}) }}\right\rangle \\ &= \frac{1}{{2}}\left\langle{{ (\mathbf{a}\mathbf{b} - \mathbf{a} \cdot \mathbf{b} ) (\mathbf{c} \wedge \mathbf{d}) + (\mathbf{d} \wedge \mathbf{c}) (\mathbf{b} \mathbf{a} - \mathbf{b} \cdot \mathbf{a} ) }}\right\rangle \\ &= \frac{1}{{2}}\left\langle{{ \mathbf{a}\mathbf{b} (\mathbf{c} \wedge \mathbf{d}) + (\mathbf{d} \wedge \mathbf{c}) \mathbf{b} \mathbf{a} }}\right\rangle \\ &= \frac{1}{{2}}\left\langle{{ \mathbf{a} (\mathbf{b} \cdot (\mathbf{c} \wedge \mathbf{d}) + \mathbf{b} \wedge (\mathbf{c} \wedge \mathbf{d})) + ( (\mathbf{d} \wedge \mathbf{c}) \cdot \mathbf{b} + (\mathbf{d} \wedge \mathbf{c}) \wedge \mathbf{b}) \mathbf{a} }}\right\rangle \\ &= \frac{1}{{2}}\left\langle{{ \mathbf{a} (\mathbf{b} \cdot (\mathbf{c} \wedge \mathbf{d})) + ( (\mathbf{d} \wedge \mathbf{c}) \cdot \mathbf{b} ) \mathbf{a} }}\right\rangle \\ &= \frac{1}{{2}}\left\langle{{ \mathbf{a} ( (\mathbf{b} \cdot \mathbf{c}) \mathbf{d} - (\mathbf{b} \cdot \mathbf{d}) \mathbf{c} ) + ( \mathbf{d} (\mathbf{c} \cdot \mathbf{b}) - \mathbf{c} (\mathbf{d} \cdot \mathbf{b}) ) \mathbf{a} }}\right\rangle \\ &= \frac{1}{{2}}( ( \mathbf{a} \cdot \mathbf{d} ) ( \mathbf{b} \cdot \mathbf{c} ) - ( \mathbf{b} \cdot \mathbf{d} ) ( \mathbf{a} \cdot \mathbf{c} ) + ( \mathbf{d} \cdot \mathbf{a} ) ( \mathbf{c} \cdot \mathbf{b} ) - ( \mathbf{c} \cdot \mathbf{a} ) ( \mathbf{d} \cdot \mathbf{b} ) ) \\ &= ( \mathbf{a} \cdot \mathbf{d} ) ( \mathbf{b} \cdot \mathbf{c} ) - ( \mathbf{a} \cdot \mathbf{c} ) ( \mathbf{b} \cdot \mathbf{d} ) \\ \end{aligned}

Summarizing with a comparison to the $\mathbb{R}^{3}$ relations we have:

\begin{aligned}(\mathbf{a} \wedge \mathbf{b}) \cdot (\mathbf{c} \wedge \mathbf{d}) = -(\mathbf{a} \times \mathbf{b}) \cdot (\mathbf{c} \times \mathbf{d}) = ( \mathbf{a} \cdot \mathbf{d} ) ( \mathbf{b} \cdot \mathbf{c} ) - ( \mathbf{a} \cdot \mathbf{c} ) ( \mathbf{b} \cdot \mathbf{d} )\end{aligned} \quad\quad\quad(25)

\begin{aligned}(\mathbf{a} \wedge \mathbf{c}) \cdot (\mathbf{b} \wedge \mathbf{c}) = -(\mathbf{a} \times \mathbf{c}) \cdot (\mathbf{b} \times \mathbf{c}) = ( \mathbf{a} \cdot \mathbf{c} ) ( \mathbf{b} \cdot \mathbf{c} ) - \mathbf{c}^2 ( \mathbf{a} \cdot \mathbf{b} )\end{aligned} \quad\quad\quad(26)

The bivector relations hold for all of $\mathbb{R}^{N}$.
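Both identities are easy to spot check numerically with random vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c, d = rng.standard_normal((4, 3))   # four random vectors in R^3

# Equation 25: (a ^ b).(c ^ d) = -(a x b).(c x d) = (a.d)(b.c) - (a.c)(b.d)
lhs = -np.dot(np.cross(a, b), np.cross(c, d))
rhs = np.dot(a, d) * np.dot(b, c) - np.dot(a, c) * np.dot(b, d)
assert np.isclose(lhs, rhs)

# Equation 26: the special case d -> c (relabeled)
lhs2 = -np.dot(np.cross(a, c), np.cross(b, c))
rhs2 = np.dot(a, c) * np.dot(b, c) - np.dot(c, c) * np.dot(a, b)
assert np.isclose(lhs2, rhs2)
```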

## Reader notes for Jackson 12.11, Retarded time solution to the wave equation.

Posted by peeterjoot on September 20, 2009

[Click here for a PDF of this sequence of posts with nicer formatting]

# Motivation

In [1] I blundered my way towards the retarded time Green’s function solution to the 3D wave equation. Jackson’s [2] (section 12.11) covers this in a much more coherent fashion. It is however somewhat terse, and some details that were not immediately obvious to me were omitted.

Here are my notes for this section in case I want to refer to it again later.

# Guts

The starting point is the electrodynamic wave equation

\begin{aligned}\partial_\alpha F^{\alpha\beta} = \frac{4 \pi}{c} J^\beta\end{aligned} \quad\quad\quad(1)

A substitution of $F^{\alpha \beta} = \partial^\alpha A^\beta - \partial^\beta A^\alpha$ gives us

\begin{aligned}\partial_\alpha F^{\alpha\beta} = \partial_\alpha \partial^\alpha A^\beta - \partial_\alpha \partial^\beta A^\alpha= \square A^\beta - \partial^\beta (\partial_\alpha A^\alpha)\end{aligned} \quad\quad\quad(2)

Thus with the Lorentz condition $\partial_\alpha A^\alpha = 0$ we have

\begin{aligned}\square A^\beta = \frac{4 \pi}{c} J^\beta\end{aligned} \quad\quad\quad(3)

This is a set of four non-homogeneous wave equations (one for each $\beta$) to solve. It is assumed that a Green’s function of the form

\begin{aligned}\square_x D(x - x') = \delta^4(x - x')\end{aligned} \quad\quad\quad(4)

can be found. Jackson states that this is possible in the absence of boundary surfaces, which seems to imply that the more general case would require $\square_x D(x, x') = \delta^4(x - x')$, where $D$ is not necessarily a function of the four vector difference $x - x'$.

What is really meant by this Green’s function? It only takes meaning in the context of the convolution integral. Namely

\begin{aligned}A^\beta = \int d^4 x' D(x, x') \frac{4 \pi}{c} J^\beta(x') \end{aligned} \quad\quad\quad(5)

So that

\begin{aligned}\square_x A^\beta &= \int d^4 x' \square_x D(x, x') \frac{4 \pi}{c} J^\beta(x') \\ &= \frac{4 \pi}{c} \int d^4 x' \delta^4(x - x') J^\beta(x') \\ &= \frac{4 \pi}{c} J^\beta(x) \\ \end{aligned}

So if a function with this delta filtering property under the d’Alembertian can be found, we can find the non-homogeneous solutions directly by four-volume convolution.

It is implied in the text (probably stated explicitly somewhere earlier) that the asymmetric convention for the Fourier transform pairs is being used

\begin{aligned}\tilde{f}(k) &= \int d^4 z f(z) e^{i k \cdot z} \\ f(z) &= \frac{1}{{(2\pi)^4}} \int d^4 k \tilde{f}(k) e^{-i k \cdot z} \end{aligned} \quad\quad\quad(6)

where $d^4 k = dk_0 dk_1 dk_2 dk_3$, and $d^4 z = dz^0 dz^1 dz^2 dz^3$, and $k \cdot z = k_\mu z^\mu = k^\mu z_\mu$.

Assuming the validity of this transform pair, even for the delta distribution, we can find an integral representation of the delta using the transform pairs. For the Fourier transform of delta we have

\begin{aligned}\tilde{\delta^4}(k) &= \int d^4 z \delta^4(z) e^{i k \cdot z} \\ &= e^{i k \cdot 0} \\ &= 1\end{aligned}

Performing the inverse transformation provides the delta function exponential integral representation

\begin{aligned}\delta^4(z) &= \frac{1}{{(2\pi)^4}} \int d^4 k \tilde{\delta^4}(k) e^{-i k \cdot z} \\ &= \frac{1}{{(2\pi)^4}} \int d^4 k e^{-i k \cdot z} \\ \end{aligned}
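A one dimensional analogue of this integral representation can be checked numerically: truncating the $k$ integral at $\pm K$ gives the kernel $\sin(K z)/(\pi z)$, which exhibits the delta filtering property against a smooth test function (the Gaussian below is a hypothetical choice):

```python
import numpy as np

# 1D analogue: delta(z) ~ (1/2pi) \int_{-K}^{K} dk e^{-i k z} = sin(K z)/(pi z)
K = 200.0
z = np.linspace(-10.0, 10.0, 400001)
delta_K = (K / np.pi) * np.sinc(K * z / np.pi)   # sin(K z)/(pi z), safe at z = 0

f = np.exp(-z**2)                                # hypothetical test function, f(0) = 1
approx = np.sum(f * delta_K) * (z[1] - z[0])     # filtering property: -> f(0) as K grows
assert abs(approx - 1.0) < 1e-2
```

Larger $K$ tightens the agreement, which is the sense in which the exponential integral represents the delta.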

Just as a Fourier representation of the delta can be found, we can integrate by parts to find an integral representation of the Green’s function that we seek. Taking Fourier transforms

\begin{aligned}\mathcal{F}(\square_x D(z))(k) &= \int d^4 z \partial_\alpha \partial^\alpha D(z) e^{i k \cdot z} \\ &= -\int d^4 z \partial^\alpha D(z) \partial_\alpha e^{i k_\beta z^\beta} \\ &= -\int d^4 z \partial^\alpha D(z) i k_\alpha e^{i k_\beta z^\beta} \\ &= \int d^4 z D(z) i k_\alpha \partial^\alpha e^{i k^\beta z_\beta} \\ &= -\int d^4 z D(z) k_\alpha k^\alpha e^{i k \cdot z } \\ &= - k^2 \tilde{D}(k)\end{aligned}

Using the assumed delta function property of this Green’s function we also have

\begin{aligned}\mathcal{F}(\square_x D(z))(k) &= \int d^4 z \delta^4(z) e^{i k \cdot z} \\ &= 1\end{aligned}

This completely specifies the Fourier transform of the Green’s function

\begin{aligned}\tilde{D}(k) &= - \frac{1}{{k^2}}\end{aligned} \quad\quad\quad(8)

and we can inverse transform to complete the task of finding an initial representation of the Green’s function itself. That is

\begin{aligned}D(z) = -\frac{1}{{(2\pi)^4}} \int d^4 k \frac{1}{{k^2}} e^{-i k \cdot z} \end{aligned} \quad\quad\quad(9)

With an explicit spacetime split we have our integral prepped for the contour integration

\begin{aligned}D(z) = -\frac{1}{{(2\pi)^4}} \int d^3 k e^{i \mathbf{k} \cdot \mathbf{z}} \int_{-\infty}^\infty dk_0 \frac{1}{{k_0^2 - \mathbf{k}^2}} e^{-i k_0 z_0} \end{aligned} \quad\quad\quad(10)
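To spell out this split (my own intermediate step, assuming the $(+,-,-,-)$ signature so that $k \cdot z = k_0 z_0 - \mathbf{k} \cdot \mathbf{z}$ and $k^2 = k_0^2 - \mathbf{k}^2$), the integrand of (9) separates as

```latex
\begin{aligned}
\frac{1}{k^2} e^{-i k \cdot z}
&= \frac{1}{k_0^2 - \mathbf{k}^2} e^{-i k_0 z_0 + i \mathbf{k} \cdot \mathbf{z}} \\
&= e^{i \mathbf{k} \cdot \mathbf{z}} \frac{1}{k_0^2 - \mathbf{k}^2} e^{-i k_0 z_0},
\end{aligned}
```

which regroups into the iterated integral of (10).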

Here $\kappa = {\left\lvert{\mathbf{k}}\right\rvert}$ is used as in the text. If we let $k_0 = R e^{i\theta}$ take on complex values, integrating over a semicircular arc, we have for the exponential

\begin{aligned}{\left\lvert{e^{-i k_0 z_0}}\right\rvert}&= {\left\lvert{e^{-i R (\cos\theta + i \sin\theta) z_0} }\right\rvert} \\ &= {\left\lvert{e^{ z_0 R \sin\theta} e^{ -i z_0 R \cos\theta } }\right\rvert} \\ &= {\left\lvert{e^{ z_0 R \sin\theta} }\right\rvert}\end{aligned}

In the upper half plane $\theta \in [0,\pi]$, so $\sin\theta$ is never negative, and the integral on an upper half plane semi-circular contour can only vanish as desired for $z_0 < 0$. Similarly, $\sin\theta$ is never positive in the lower half plane, so for $z_0 > 0$ the arc contribution vanishes only for a lower half plane contour. This is mentioned in the text but I felt it more clear just writing out the exponential as above.

\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.4\textheight]{retardedContourBoth}
\caption{Contours strictly above the $k_0 = 0$ axis}
\label{fig:jacksonRet:retardedContourBoth}
\end{figure}

\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.4\textheight]{retardedContourAroundPole}
\caption{Contour around pole}
\label{fig:jacksonRet:retardedContourAroundPole}
\end{figure}

Having established the value on the loop at infinity we can now integrate over the contour $r_1$ as depicted in figure (\ref{fig:jacksonRet:retardedContourBoth}). The problem is mainly reduced to integrals of the form depicted in figure (\ref{fig:jacksonRet:retardedContourAroundPole}), taken around the simple poles at $\alpha = \pm \kappa$

\begin{aligned}I_\alpha = \oint \frac{f(z)}{z - \alpha} dz\end{aligned} \quad\quad\quad(11)

With $z = \alpha + R e^{i\theta}$, and $\theta \in [\pi/2, 5\pi/2]$, we have

\begin{aligned}I_\alpha = \int \frac{f(z)}{R e^{i\theta}} R i e^{i\theta} d\theta\end{aligned} \quad\quad\quad(12)

with $R \rightarrow 0$, we are left with

\begin{aligned}I_\alpha = 2 \pi i f(\alpha)\end{aligned} \quad\quad\quad(13)

There are six arcs on the contour of interest. For the first two, around the poles, let's label the integral contributions $I_\kappa$ and $I_{-\kappa}$. Along the infinite semicircular contour the integral vanishes with the right sign choice for $z_0$. For the remainder let's write the combined integral contribution $I$.

Summing over the complete contour, specially chosen to enclose no poles, we have

\begin{aligned}I + I_\kappa + I_{-\kappa} + 0 = 0 \end{aligned} \quad\quad\quad(14)

For this $z_0 > 0$ integral we are left with the residue sum

\begin{aligned}\int_{-\infty}^\infty dk_0 \frac{1}{{k_0^2 - \mathbf{k}^2}} e^{-i k_0 z_0} &= - 2 \pi i \left( {\left. \frac{1}{{k_0 - \kappa}} e^{-i k_0 z_0} \right\vert}_{k_0 = -\kappa}+{\left. \frac{1}{{k_0 + \kappa}} e^{-i k_0 z_0} \right\vert}_{k_0 = \kappa}\right) \\ &= \frac{2 \pi i^2}{\kappa} \sin(\kappa z_0)\end{aligned}
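As a numerical sanity check (my aside, not from the text), the residue sum above can be spot-checked directly: evaluating $-2\pi i$ times the two pole contributions should reproduce $-2\pi \sin(\kappa z_0)/\kappa$ for arbitrary test values.

```python
# Spot check of the k_0 residue sum against -2*pi*sin(kappa*z0)/kappa.
# The test values of kappa and z0 are arbitrary.
import cmath
import math

kappa, z0 = 2.3, 1.1

f = lambda k0: cmath.exp(-1j * k0 * z0)   # the numerator of the integrand

# f/(k0 - kappa) evaluated at k0 = -kappa, plus f/(k0 + kappa) at k0 = kappa
residue_sum = f(-kappa) / (-2 * kappa) + f(kappa) / (2 * kappa)
result = -2 * math.pi * 1j * residue_sum

# (2 pi i^2 / kappa) sin(kappa z0) = -2 pi sin(kappa z0) / kappa
target = -2 * math.pi * math.sin(kappa * z0) / kappa
assert abs(result - target) < 1e-12
```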

I can never remember the signs and integral orientations for the residue formula, so I’ve always done it “manually” as above, picking a zero valued contour.

\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.4\textheight]{retardedContourOnAxis}
\caption{Contour exactly on the $k_0 = 0$ axis?}
\label{fig:jacksonRet:retardedContourOnAxis}
\end{figure}

Now, the issue of where to place the contour wasn’t really discussed mathematically. Physically this makes the difference between causal and acausal behaviour, but why put the contour strictly above or below the axis and not right on it? If we put the contour exactly on the $k_0 = 0$ axis as in (\ref{fig:jacksonRet:retardedContourOnAxis}), then our integrals around the two half circular poles give us a result off by a factor of two. There is also an (implied) limiting procedure required to place the contour strictly above the axis, and the details of this aren’t mentioned (and I also haven’t thought them through). Some of these points would be worth thinking through in more detail, but for now let’s ignore them. We are left with

\begin{aligned}D(z) = \frac{\theta(z_0)}{(2\pi)^3} \int d^3 k e^{i \mathbf{k} \cdot \mathbf{z}} \frac{1}{{\kappa}} \sin(\kappa z_0)\end{aligned} \quad\quad\quad(15)

How to reduce this to the single variable integral in $\kappa$ was not immediately clear to me. Aligning $\mathbf{z}$ with the $\mathbf{e}_3$ axis, and using a spherical polar representation for $\mathbf{k}$ we can write $\mathbf{z} \cdot \mathbf{k} = R \kappa \cos\theta$. With this and the volume element $d^3 k = \kappa^2 \sin\theta d\theta d\phi d\kappa$, we have

\begin{aligned}D(z) = \frac{\theta(z_0)}{(2\pi)^3} \int_0^\infty d\kappa \sin(\kappa z_0) \int_0^{2\pi} d\phi \int_0^\pi d\theta \kappa e^{i R \kappa \cos\theta} \sin\theta\end{aligned} \quad\quad\quad(16)

This now happily submits to a nice variable substitution, unlike an integral such as $\int_0^{2\pi} e^{i \mu \cos\theta} d\theta = 2 \pi J_0({\left\lvert{\mu}\right\rvert})$, which can be evaluated, but only in terms of Bessel functions or a messy series expansion. Writing $\tau = \kappa \cos\theta$, and $-d\tau = \kappa \sin\theta d\theta$, we have

\begin{aligned}\int_0^\pi d\theta \kappa e^{i R \kappa \cos\theta} \sin\theta&=-\int_{\kappa}^{-\kappa} d\tau e^{i R \tau} \\ &=\frac{e^{i R \kappa}}{i R} -\frac{e^{-i R \kappa}}{i R} \\ &=2 \frac{1}{{R}} \sin(R \kappa)\end{aligned}
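This angular integral is easy to spot check numerically (my aside, not from the text); a crude midpoint-rule quadrature should agree with $2 \sin(R\kappa)/R$ for arbitrary test values.

```python
# Midpoint-rule check of
#   \int_0^\pi kappa e^{i R kappa cos(theta)} sin(theta) dtheta = 2 sin(R kappa)/R
# The values of R and kappa are arbitrary test inputs.
import cmath
import math

R, kappa = 1.7, 2.3
n = 200000
h = math.pi / n

total = 0.0 + 0.0j
for i in range(n):
    theta = (i + 0.5) * h   # midpoint of the i-th subinterval
    total += kappa * cmath.exp(1j * R * kappa * math.cos(theta)) * math.sin(theta) * h

exact = 2.0 * math.sin(R * kappa) / R
assert abs(total - exact) < 1e-8
```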

Our Green’s function is now reduced to

\begin{aligned}D(z) = \frac{\theta(z_0)}{2 \pi^2 R} \int_0^\infty d\kappa \sin(\kappa z_0) \sin(\kappa R)\end{aligned} \quad\quad\quad(17)

Expanding out these sines in terms of exponentials we have

\begin{aligned}D(z) &= -\frac{\theta(z_0)}{8 \pi^2 R} \int_0^\infty d\kappa ( e^{i\kappa(z_0+R)} + e^{-i\kappa(z_0+R)} -e^{i\kappa(R-z_0)} - e^{i\kappa(z_0-R)} ) \\ &= -\frac{\theta(z_0)}{8 \pi^2 R} \left(\int_0^\infty d\kappa \left( e^{i\kappa(z_0+R)} -e^{i\kappa(R-z_0)} \right) +\int_0^{-\infty} -d\kappa \left( e^{i\kappa(z_0+R)} - e^{-i\kappa(z_0-R)} \right) \right)\\ &= \frac{\theta(z_0)}{8 \pi^2 R} \int_{-\infty}^\infty d\kappa \left( e^{i\kappa(R-z_0)} -e^{i\kappa(z_0+R)} \right) \\ \end{aligned}

The sign in this first exponential differs from what Jackson obtained, but it won’t change the end result. Did I make a mistake or did he? I wonder what the third edition shows. Using $\delta(x) = \int e^{-ikx} dk/2\pi$ we have

\begin{aligned}D(z) = \frac{\theta(z_0)}{4 \pi R} \left( \delta(z_0 -R) - \delta(-(z_0 + R)) \right)\end{aligned} \quad\quad\quad(18)
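Spelled out (my own intermediate step), the delta identifications used here are

```latex
\begin{aligned}
\frac{1}{2\pi} \int_{-\infty}^\infty d\kappa \, e^{i\kappa(R - z_0)} &= \delta(z_0 - R) \\
\frac{1}{2\pi} \int_{-\infty}^\infty d\kappa \, e^{i\kappa(z_0 + R)} &= \delta(-(z_0 + R)) = \delta(z_0 + R),
\end{aligned}
```

which, with the $\theta(z_0)/(8\pi^2 R)$ prefactor, supply the $1/(4\pi R)$ in (18).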

With $R = {\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert} \ge 0$, and $z_0 = c(t - t') > 0$, this second delta cannot contribute, and we are left with the retarded Green’s function

\begin{aligned}D_r(z) = \frac{\theta(z_0)}{4 \pi R} \delta(c(t -t') - {\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}) \end{aligned} \quad\quad\quad(19)

Very slick. I like the procedure, despite a few magic steps (like the choice to offset the contour).
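As a final spot check (my aside, not part of the original derivation), the sine product expansion used in going from (17) to (18) can be verified numerically for a few arbitrary values.

```python
# Check of the exponential expansion of a sine product (a = kappa z_0, b = kappa R):
#   sin(a) sin(b) = -( e^{i(a+b)} + e^{-i(a+b)} - e^{i(b-a)} - e^{i(a-b)} )/4
import cmath
import math

def expansion(a, b):
    # right hand side of the identity above
    return -(cmath.exp(1j * (a + b)) + cmath.exp(-1j * (a + b))
             - cmath.exp(1j * (b - a)) - cmath.exp(1j * (a - b))) / 4

max_err = max(abs(math.sin(a) * math.sin(b) - expansion(a, b))
              for a, b in [(0.3, 1.2), (2.5, -0.7), (4.0, 4.0)])
assert max_err < 1e-12
```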

# References

[1] Peeter Joot. Poisson and retarded potential Green’s functions from Fourier kernels [online]. http://sites.google.com/site/peeterjoot/math2009/poisson.pdf.

[2] J.D. Jackson. Classical Electrodynamics. Wiley, 2nd edition, 1975.

## math chat page.

Posted by peeterjoot on September 16, 2009

For random followup to email where the $latex …$ wordpress capability will be helpful. No content here, just comments.

Posted in Incoherent ramblings | 31 Comments »