Peeter Joot's (OLD) Blog.

Math, physics, perl, and programming obscurity.

Bivector grades of the squared angular momentum operator.

Posted by peeterjoot on September 6, 2009

[Click here for a PDF of this sequence of posts with nicer formatting]


The aim here is to extract the bivector grades of the squared angular momentum operator

\begin{aligned}{\left\langle{{ (x \wedge \nabla)^2 }}\right\rangle}_{2} \stackrel{?}{=} \cdots \end{aligned} \quad\quad\quad(1)

I’d tried this before and believe I got it wrong. This time, take it super slow, dumb, and careful.

Non-operator expansion.

Suppose P is a bivector, P = (\gamma^k \wedge \gamma^m) P_{km}. The grade two part of its product with a different bivector is

\begin{aligned}{\left\langle{{ (\gamma_a \wedge \gamma_b) (\gamma^k \wedge \gamma^m) }}\right\rangle}_{2} P_{km} &= {\left\langle{{ (\gamma_a \gamma_b - \gamma_a \cdot \gamma_b) (\gamma^k \wedge \gamma^m) }}\right\rangle}_{2} P_{km} \\ &= {\left\langle{{ \gamma_a (\gamma_b \cdot (\gamma^k \wedge \gamma^m)) }}\right\rangle}_{2} P_{km} + {\left\langle{{ \gamma_a (\gamma_b \wedge (\gamma^k \wedge \gamma^m)) }}\right\rangle}_{2} P_{km} - (\gamma_a \cdot \gamma_b) (\gamma^k \wedge \gamma^m) P_{km} \\ &= (\gamma_a \wedge \gamma^m) P_{b m} -(\gamma_a \wedge \gamma^k) P_{k b} - (\gamma_a \cdot \gamma_b) (\gamma^k \wedge \gamma^m) P_{km} \\ &+ (\gamma_a \cdot \gamma_b) (\gamma^k \wedge \gamma^m) P_{km} - (\gamma_b \wedge \gamma^m) P_{a m} + (\gamma_b \wedge \gamma^k) P_{k a} \\ &= (\gamma_a \wedge \gamma^c) (P_{b c} -P_{c b})+ (\gamma_b \wedge \gamma^c) (P_{c a} -P_{a c} ) \end{aligned}
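As a sanity check (not part of the original derivation), this expansion can be verified with sympy in a Euclidean 3D orthonormal frame, where \gamma^k = \gamma_k, using the basis-bivector identity \left\langle{(e_a \wedge e_b)(e_k \wedge e_m)}\right\rangle_2 = \delta_{bk} \, e_a \wedge e_m - \delta_{ak} \, e_b \wedge e_m - \delta_{bm} \, e_a \wedge e_k + \delta_{am} \, e_b \wedge e_k. The helper names below are my own, and bivectors are represented as dictionaries of coefficients keyed by sorted index pairs:

```python
# Symbolic check of the non-operator grade-2 expansion in a Euclidean 3D
# orthonormal frame (gamma^k = gamma_k). Verification sketch only; the
# helper names (add, grade2) are mine, not from the post.
import sympy as sp
from itertools import product

n = 3
P = [[sp.Symbol(f'P_{k}{m}') for m in range(n)] for k in range(n)]

def add(coeffs, i, j, c):
    """Accumulate c * (e_i ^ e_j) into a dict keyed by sorted index pairs."""
    if i == j:
        return
    if i < j:
        coeffs[(i, j)] = coeffs.get((i, j), 0) + c
    else:
        coeffs[(j, i)] = coeffs.get((j, i), 0) - c

def grade2(a, b, k, m, c, coeffs):
    """<(e_a ^ e_b)(e_k ^ e_m)>_2, distributed into bivector coefficients."""
    if b == k: add(coeffs, a, m, c)
    if a == k: add(coeffs, b, m, -c)
    if b == m: add(coeffs, a, k, -c)
    if a == m: add(coeffs, b, k, c)

a, b = 0, 1   # the fixed bivector e_a ^ e_b
lhs = {}
for k, m in product(range(n), repeat=2):
    grade2(a, b, k, m, P[k][m], lhs)

# (e_a ^ e_c)(P_bc - P_cb) + (e_b ^ e_c)(P_ca - P_ac)
rhs = {}
for c in range(n):
    add(rhs, a, c, P[b][c] - P[c][b])
    add(rhs, b, c, P[c][a] - P[a][c])

ok = all(sp.simplify(lhs.get(key, 0) - rhs.get(key, 0)) == 0
         for key in set(lhs) | set(rhs))
print(ok)  # True
```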

This same procedure will be used for the operator square, but we have the added complexity that the second application of the angular momentum operator acts on the bivector-valued result of the first.

Operator expansion.

In the first few lines of the bivector product expansion above, a blind replacement \gamma_a \rightarrow x, and \gamma_b \rightarrow \nabla gives us

\begin{aligned}{\left\langle{{ (x \wedge \nabla) (\gamma^k \wedge \gamma^m) }}\right\rangle}_{2} P_{km} &= {\left\langle{{ (x \nabla - x \cdot \nabla) (\gamma^k \wedge \gamma^m) }}\right\rangle}_{2} P_{km} \\ &= {\left\langle{{ x (\nabla \cdot (\gamma^k \wedge \gamma^m)) }}\right\rangle}_{2} P_{km} + {\left\langle{{ x (\nabla \wedge (\gamma^k \wedge \gamma^m)) }}\right\rangle}_{2} P_{km} - (x \cdot \nabla) (\gamma^k \wedge \gamma^m) P_{km} \end{aligned}

Using P_{km} = x_k \partial_m and eliminating the coordinate expansion, we have an intermediate result that gets us partway to the desired result

\begin{aligned}{\left\langle{{ (x \wedge \nabla)^2 }}\right\rangle}_{2}&={\left\langle{{ x (\nabla \cdot (x \wedge \nabla)) }}\right\rangle}_{2} + {\left\langle{{ x (\nabla \wedge (x \wedge \nabla)) }}\right\rangle}_{2} - (x \cdot \nabla) (x \wedge \nabla)  \end{aligned} \quad\quad\quad(2)

An expansion of the first term should be easier than the second. Dropping back to coordinates we have

\begin{aligned}{\left\langle{{ x (\nabla \cdot (x \wedge \nabla)) }}\right\rangle}_{2} &={\left\langle{{ x (\nabla \cdot (\gamma^k \wedge \gamma^m)) }}\right\rangle}_{2} x_k \partial_m \\ &={\left\langle{{ x (\gamma_a \partial^a \cdot (\gamma^k \wedge \gamma^m)) }}\right\rangle}_{2} x_k \partial_m \\ &={\left\langle{{ x \gamma^m \partial^k }}\right\rangle}_{2} x_k \partial_m -{\left\langle{{ x \gamma^k \partial^m }}\right\rangle}_{2} x_k \partial_m  \\ &=x \wedge (\partial^k x_k \gamma^m \partial_m )- x \wedge (\partial^m \gamma^k x_k \partial_m ) \end{aligned}

Okay, a bit closer. Backpedaling with the reinsertion of the complete vector quantities we have

\begin{aligned}{\left\langle{{ x (\nabla \cdot (x \wedge \nabla)) }}\right\rangle}_{2} &= x \wedge (\partial^k x_k \nabla ) - x \wedge (\partial^m x \partial_m )  \end{aligned} \quad\quad\quad(3)

Expanding out these two will be conceptually easier if the functional operation is made explicit. For the first

\begin{aligned}x \wedge (\partial^k x_k \nabla ) \phi&=x \wedge x_k \partial^k (\nabla \phi)+x \wedge ((\partial^k x_k) \nabla) \phi \\ &=x \wedge ((x \cdot \nabla) (\nabla \phi))+ n (x \wedge \nabla) \phi \end{aligned}

In operator form this is

\begin{aligned}x \wedge (\partial^k x_k \nabla ) &= n (x \wedge \nabla) + x \wedge ((x \cdot \nabla) \nabla )  \end{aligned} \quad\quad\quad(4)
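This operator identity can be spot-checked with sympy in Euclidean 3D (n = 3) by applying both sides to a sample scalar field and comparing e_i \wedge e_j coefficients. A verification sketch with my own helper names, not code from the post:

```python
# Check of (4) in Euclidean 3D: compare bivector coefficients of
# x ^ (d^k x_k grad) phi against n (x ^ grad) phi + x ^ ((x . grad) grad phi).
import sympy as sp

X = sp.symbols('x1 x2 x3')
n = len(X)
phi = X[0]**2 * X[1] + X[1] * X[2]**3   # arbitrary test field

# v = sum_k d_k (x_k grad phi), the vector wedged with x on the left
v = [sum(sp.diff(X[k] * sp.diff(phi, X[j]), X[k]) for k in range(n))
     for j in range(n)]

def lhs(i, j):                      # (x ^ v) coefficient of e_i ^ e_j
    return X[i] * v[j] - X[j] * v[i]

def xdg_grad(j):                    # j-th component of (x . grad) grad phi
    return sum(X[k] * sp.diff(phi, X[k], X[j]) for k in range(n))

def rhs(i, j):                      # n (x ^ grad) phi + x ^ ((x . grad) grad phi)
    return (n * (X[i] * sp.diff(phi, X[j]) - X[j] * sp.diff(phi, X[i]))
            + X[i] * xdg_grad(j) - X[j] * xdg_grad(i))

ok = all(sp.simplify(lhs(i, j) - rhs(i, j)) == 0
         for i in range(n) for j in range(i + 1, n))
print(ok)  # True
```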

Now consider the second half of (3). For that we expand

\begin{aligned}x \wedge (\partial^m x \partial_m ) \phi&=x \wedge (x \partial_m \partial^m \phi)+ x \wedge ((\partial^m x) \partial_m \phi) \end{aligned}

Since x \wedge x = 0, and \partial^m x = \partial^m x_k \gamma^k = \gamma^m, we have

\begin{aligned}x \wedge (\partial^m x \partial_m ) \phi&=x \wedge (\gamma^m \partial_m ) \phi \\ &=(x \wedge \nabla) \phi \end{aligned}

Putting things back together we have for (3)

\begin{aligned}{\left\langle{{ x (\nabla \cdot (x \wedge \nabla)) }}\right\rangle}_{2} &= (n-1) (x \wedge \nabla) + x \wedge ((x \cdot \nabla) \nabla )  \end{aligned} \quad\quad\quad(5)
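Equation (5) admits the same kind of coordinate spot check (my own sketch, Euclidean 3D): \nabla \cdot (x \wedge \nabla)\phi is a vector, built from (e_a \partial_a) \cdot (e_k \wedge e_m) = \delta_{ak} e_m - \delta_{am} e_k, which is then wedged with x on the left:

```python
# Check of (5) in Euclidean 3D: <x (grad . (x ^ grad))>_2 phi versus
# (n-1)(x ^ grad) phi + x ^ ((x . grad) grad phi).
import sympy as sp

X = sp.symbols('x1 x2 x3')
n = len(X)
phi = X[0]**2 * X[1] + X[0] * X[2]**2

# grad . (x ^ grad phi), using (e_a d_a).(e_k ^ e_m) = d_ak e_m - d_am e_k
v = [sum(sp.diff(X[k] * sp.diff(phi, X[j]), X[k]) for k in range(n))
     - sum(sp.diff(X[j] * sp.diff(phi, X[m]), X[m]) for m in range(n))
     for j in range(n)]

def xdg_grad(j):                    # j-th component of (x . grad) grad phi
    return sum(X[k] * sp.diff(phi, X[k], X[j]) for k in range(n))

ok = all(sp.simplify(
        (X[i] * v[j] - X[j] * v[i])        # x ^ (grad . (x ^ grad phi))
        - ((n - 1) * (X[i] * sp.diff(phi, X[j]) - X[j] * sp.diff(phi, X[i]))
           + X[i] * xdg_grad(j) - X[j] * xdg_grad(i))) == 0
    for i in range(n) for j in range(i + 1, n))
print(ok)  # True
```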

This now completes a fair amount of the bivector selection, and a substitution back into (2) yields

\begin{aligned}{\left\langle{{ (x \wedge \nabla)^2 }}\right\rangle}_{2}&=(n-1 - x \cdot \nabla) (x \wedge \nabla) + x \wedge ((x \cdot \nabla) \nabla ) + x \cdot (\nabla \wedge (x \wedge \nabla))  \end{aligned} \quad\quad\quad(6)

The remaining task is to explicitly expand the last vector-trivector dot product. To do that we use the basic alternation expansion identity

\begin{aligned}a \cdot (b \wedge c \wedge d)&= (a \cdot b) (c \wedge d)-(a \cdot c) (b \wedge d)+(a \cdot d) (b \wedge c) \end{aligned} \quad\quad\quad(7)
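The alternation identity itself can be confirmed symbolically in Euclidean 3D (a sketch of mine, not from the post): there b \wedge c \wedge d = s I with s the determinant of the matrix of components and I = e_1 e_2 e_3 the pseudoscalar, and a \cdot (s I) = s\, a I expands to the bivector components (s a_1)\, e_2 \wedge e_3 - (s a_2)\, e_1 \wedge e_3 + (s a_3)\, e_1 \wedge e_2:

```python
# Check of the alternation identity (7), a . (b ^ c ^ d), in Euclidean 3D
# with fully symbolic vectors.
import sympy as sp

a = sp.Matrix(sp.symbols('a1 a2 a3'))
b = sp.Matrix(sp.symbols('b1 b2 b3'))
c = sp.Matrix(sp.symbols('c1 c2 c3'))
d = sp.Matrix(sp.symbols('d1 d2 d3'))

s = sp.Matrix.hstack(b, c, d).det()          # b ^ c ^ d = s I
lhs = {(0, 1): s * a[2], (0, 2): -s * a[1], (1, 2): s * a[0]}  # a . (s I)

def wedge(u, v):
    """Bivector coefficients of u ^ v, keyed by index pairs i < j."""
    return {(i, j): u[i] * v[j] - u[j] * v[i]
            for i in range(3) for j in range(i + 1, 3)}

def dot(u, v):
    return (u.T * v)[0]

rhs = {key: dot(a, b) * wedge(c, d)[key]
            - dot(a, c) * wedge(b, d)[key]
            + dot(a, d) * wedge(b, c)[key]
       for key in [(0, 1), (0, 2), (1, 2)]}

ok = all(sp.expand(lhs[key] - rhs[key]) == 0 for key in lhs)
print(ok)  # True
```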

To see how to apply this to the operator case, let's write it out explicitly, temporarily in coordinates

\begin{aligned}x \cdot (\nabla \wedge (x \wedge \nabla)) \phi&=(x^\mu \gamma_\mu) \cdot ((\gamma^\nu \partial_\nu ) \wedge (x_\alpha \gamma^\alpha \wedge (\gamma^\beta \partial_\beta))) \phi \\ &=x \cdot \nabla (x \wedge \nabla) \phi-x \cdot \gamma^\alpha \nabla \wedge x_\alpha \nabla \phi+x^\mu \nabla \wedge x \gamma_\mu \cdot \gamma^\beta \partial_\beta  \phi \\ &=x \cdot \nabla (x \wedge \nabla) \phi-x^\alpha \nabla \wedge x_\alpha \nabla \phi+x^\mu \nabla \wedge x \partial_\mu  \phi \end{aligned}

Considering this term by term, starting with the second one, we have

\begin{aligned}x^\alpha \nabla \wedge x_\alpha \nabla \phi&=x_\alpha (\gamma^\mu \partial_\mu) \wedge x^\alpha \nabla \phi \\ &=x_\alpha \gamma^\mu \wedge (\partial_\mu x^\alpha) \nabla \phi +x_\alpha \gamma^\mu \wedge x^\alpha \partial_\mu \nabla \phi  \\ &=x_\mu \gamma^\mu \wedge \nabla \phi +x_\alpha x^\alpha \gamma^\mu \wedge \partial_\mu \nabla \phi  \\ &=x \wedge \nabla \phi +x^2 \nabla \wedge \nabla \phi \end{aligned}

The curl of a gradient is zero, since the sum \gamma^\mu \wedge \gamma^\nu \partial_{\mu\nu} over a product of antisymmetric and symmetric indexes is zero. Only one term now remains to evaluate in the vector-trivector dot product

\begin{aligned}x \cdot (\nabla \wedge x \wedge \nabla) &=(-1 + x \cdot \nabla )(x \wedge \nabla) +x^\mu \nabla \wedge x \partial_\mu   \end{aligned} \quad\quad\quad(8)

Again, a completely dumb and brute force expansion of this is

\begin{aligned}x^\mu \nabla \wedge x \partial_\mu \phi&=x^\mu (\gamma^\nu \partial_\nu) \wedge (x^\alpha \gamma_\alpha) \partial_\mu \phi \\ &=x^\mu \gamma^\nu \wedge (\partial_\nu (x^\alpha \gamma_\alpha)) \partial_\mu \phi +x^\mu \gamma^\nu \wedge (x^\alpha \gamma_\alpha) \partial_\nu \partial_\mu \phi \\ &=x^\mu (\gamma^\alpha \wedge \gamma_\alpha) \partial_\mu \phi +x^\mu \gamma^\nu \wedge x \partial_\nu \partial_\mu \phi \end{aligned}

With \gamma^\mu = \pm \gamma_\mu, the wedge in the first term is zero, leaving

\begin{aligned}x^\mu \nabla \wedge x \partial_\mu \phi&=-x^\mu x \wedge \gamma^\nu \partial_\nu \partial_\mu \phi \\ &=-x^\mu x \wedge \gamma^\nu \partial_\mu \partial_\nu \phi \\ &=-x \wedge x^\mu \partial_\mu \gamma^\nu \partial_\nu \phi \end{aligned}

In vector form we have finally

\begin{aligned}x^\mu \nabla \wedge x \partial_\mu \phi &= -x \wedge (x \cdot \nabla) \nabla \phi  \end{aligned} \quad\quad\quad(9)
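Here too a coordinate spot check is easy in Euclidean 3D (my own sketch): the left side is \sum_\mu x_\mu \, \nabla \wedge (x \, \partial_\mu \phi), where the curl of a vector field w has bivector components \partial_i w_j - \partial_j w_i:

```python
# Check of (9) in Euclidean 3D: x^mu grad ^ x d_mu phi = -x ^ (x . grad) grad phi,
# comparing e_i ^ e_j coefficients.
import sympy as sp

X = sp.symbols('x1 x2 x3')
n = len(X)
phi = X[0]**2 * X[1] + X[0] * X[2]**2

def lhs(i, j):
    # sum_mu x_mu (d_i (x_j d_mu phi) - d_j (x_i d_mu phi))
    return sum(X[mu] * (sp.diff(X[j] * sp.diff(phi, X[mu]), X[i])
                        - sp.diff(X[i] * sp.diff(phi, X[mu]), X[j]))
               for mu in range(n))

def rhs(i, j):
    # -(x ^ (x . grad) grad phi) coefficient of e_i ^ e_j
    return -(X[i] * sum(X[k] * sp.diff(phi, X[k], X[j]) for k in range(n))
             - X[j] * sum(X[k] * sp.diff(phi, X[k], X[i]) for k in range(n)))

ok = all(sp.simplify(lhs(i, j) - rhs(i, j)) == 0
         for i in range(n) for j in range(i + 1, n))
print(ok)  # True
```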

The final expansion of the vector-trivector dot product is now

\begin{aligned}x \cdot (\nabla \wedge x \wedge \nabla) &=(-1 + x \cdot \nabla )(x \wedge \nabla) -x \wedge (x \cdot \nabla) \nabla  \end{aligned} \quad\quad\quad(10)

This was the last piece we needed for the bivector grade selection. Incorporating this into (6), both the (x \cdot \nabla)(x \wedge \nabla) and the x \wedge (x \cdot \nabla) \nabla terms cancel, leaving the surprisingly simple result

\begin{aligned}{\left\langle{{ (x \wedge \nabla)^2 }}\right\rangle}_{2}&=(n-2) (x \wedge \nabla)  \end{aligned} \quad\quad\quad(11)
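This final result can be verified end to end with sympy in Euclidean 3D (n = 3, so n - 2 = 1), again a sketch of my own: the operator square is expanded as (e_k \wedge e_m)(e_a \wedge e_b) \, x_k \partial_m (x_a \partial_b \phi), with the grade-2 part of the basis-bivector product distributed by Kronecker deltas:

```python
# Check of (11) in Euclidean 3D: <(x ^ grad)^2>_2 phi = (n-2)(x ^ grad) phi.
import sympy as sp
from itertools import product

X = sp.symbols('x1 x2 x3')
n = len(X)
phi = X[0]**2 * X[1] + X[1] * X[2]**2

def add(C, i, j, c):
    """Accumulate c * (e_i ^ e_j) into a dict keyed by sorted index pairs."""
    if i == j: return
    if i < j: C[(i, j)] = C.get((i, j), 0) + c
    else:     C[(j, i)] = C.get((j, i), 0) - c

lhs = {}
for k, m, a, b in product(range(n), repeat=4):
    # outer (x ^ grad) acting on the coefficients of the inner (x ^ grad) phi
    term = X[k] * sp.diff(X[a] * sp.diff(phi, X[b]), X[m])
    # <(e_k ^ e_m)(e_a ^ e_b)>_2 distributed by deltas
    if m == a: add(lhs, k, b, term)
    if k == a: add(lhs, m, b, -term)
    if m == b: add(lhs, k, a, -term)
    if k == b: add(lhs, m, a, term)

rhs = {}
for i, j in product(range(n), repeat=2):
    add(rhs, i, j, (n - 2) * X[i] * sp.diff(phi, X[j]))

ok = all(sp.simplify(lhs.get(key, 0) - rhs.get(key, 0)) == 0
         for key in set(lhs) | set(rhs))
print(ok)  # True
```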

The power of this result is that it allows us to write the scalar angular momentum operator that appears in the Laplacian as

\begin{aligned}\left\langle{{ (x \wedge \nabla)^2 }}\right\rangle &= (x \wedge \nabla)^2 - {\left\langle{{ (x \wedge \nabla)^2 }}\right\rangle}_{2} - (x \wedge \nabla) \wedge (x \wedge \nabla) \\ &= (x \wedge \nabla)^2 - (n-2) (x \wedge \nabla) - (x \wedge \nabla) \wedge (x \wedge \nabla) \\ &= (-(n-2) + (x \wedge \nabla) - (x \wedge \nabla) \wedge ) (x \wedge \nabla)  \end{aligned}

The complete Laplacian is

\begin{aligned}\nabla^2 &= \frac{1}{{x^2}} (x \cdot \nabla)^2 + (n - 2) \frac{1}{{x}} \cdot \nabla - \frac{1}{{x^2}} \left((x \wedge \nabla)^2 - (n-2) (x \wedge \nabla) - (x \wedge \nabla) \wedge (x \wedge \nabla) \right) \end{aligned} \quad\quad\quad(12)

In particular, in fewer than four dimensions the quad-vector term is necessarily zero. The 3D Laplacian becomes

\begin{aligned}\boldsymbol{\nabla}^2 &= \frac{1}{{\mathbf{x}^2}} (1 + \mathbf{x} \cdot \boldsymbol{\nabla})(\mathbf{x} \cdot \boldsymbol{\nabla})+ \frac{1}{{\mathbf{x}^2}} (1 - \mathbf{x} \wedge \boldsymbol{\nabla}) (\mathbf{x} \wedge \boldsymbol{\nabla})  \end{aligned} \quad\quad\quad(13)

So any eigenfunction of the bivector angular momentum operator \mathbf{x} \wedge \boldsymbol{\nabla} is necessarily a simultaneous eigenfunction of the scalar operator.
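As a closing check (my own sketch, not from the post), the 3D decomposition can be tested directly: in 3D the bivector grades on the right of (13) cancel by (11), so multiplying through by \mathbf{x}^2 gives the scalar identity \mathbf{x}^2 \boldsymbol{\nabla}^2 \phi = (\mathbf{x} \cdot \boldsymbol{\nabla})^2 \phi + (\mathbf{x} \cdot \boldsymbol{\nabla}) \phi - \left\langle{(\mathbf{x} \wedge \boldsymbol{\nabla})^2}\right\rangle_0 \phi, where the scalar grade uses \left\langle{(e_k \wedge e_m)(e_a \wedge e_b)}\right\rangle_0 = \delta_{ma}\delta_{kb} - \delta_{ka}\delta_{mb}:

```python
# Check of the 3D Laplacian decomposition (13), in the scalar-identity form
# x^2 lap phi = (x.grad)^2 phi + (x.grad) phi - <(x ^ grad)^2>_0 phi.
import sympy as sp
from itertools import product

X = sp.symbols('x1 x2 x3')
n = len(X)
phi = X[0]**3 * X[1] + X[1]**2 * X[2] + X[0] * X[2]

x2 = sum(xi**2 for xi in X)
lap = sum(sp.diff(phi, xi, 2) for xi in X)
xdg = sum(X[k] * sp.diff(phi, X[k]) for k in range(n))    # (x . grad) phi
xdg2 = sum(X[k] * sp.diff(xdg, X[k]) for k in range(n))   # (x . grad)^2 phi

# scalar grade of (x ^ grad)^2 phi
S = sum((int(m == a) * int(k == b) - int(k == a) * int(m == b))
        * X[k] * sp.diff(X[a] * sp.diff(phi, X[b]), X[m])
        for k, m, a, b in product(range(n), repeat=4))

ok = sp.simplify(x2 * lap - (xdg2 + (n - 2) * xdg - S)) == 0
print(ok)  # True
```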
