
# Posts Tagged ‘commutator’

## PHY354H1S. Advanced Classical Mechanics (Taught by Prof. Erich Poppitz). Phase Space and Trajectories.

Posted by peeterjoot on March 5, 2012

[Click here for a PDF of this post with nicer formatting and figures if the post had any (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

# Phase space and phase trajectories.

Phase space is the space of the $p$'s and $q$'s of a mechanical system. It is always even dimensional, with as many $p$'s as $q$'s: for N particles in 3d it is a 6N dimensional space.

The state of a mechanical system $\equiv$ the point in phase space.
Time evolution $\equiv$ a curve in phase space.

Example: 1 dim system, say a harmonic oscillator.

\begin{aligned}H = \frac{p^2}{2m} + \frac{1}{2} m \omega^2 q^2\end{aligned} \hspace{\stretch{1}}(1.1)

Our phase space can be illustrated as an ellipse as in figure (\ref{fig:phaseSpaceAndTrajectories:phaseSpaceAndTrajectoriesFig1})
\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.2\textheight]{phaseSpaceAndTrajectoriesFig1}
\caption{Harmonic oscillator phase space trajectory.}
\end{figure}

which shows the phase space trajectory of the SHO. The equation describing the ellipse is

\begin{aligned}E = \frac{p^2}{2m} + \frac{1}{2} m \omega^2 q^2,\end{aligned} \hspace{\stretch{1}}(1.2)

which we can put into standard elliptical form as

\begin{aligned}1 = \left( \frac{p}{\sqrt{2 m E}}\right)^2 + \left(\sqrt{\frac{m}{2 E}} \omega q\right)^2\end{aligned} \hspace{\stretch{1}}(1.3)
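This is easy to see numerically. Here is a sketch (the parameter values are arbitrary choices for illustration) that integrates Hamilton's equations for the SHO with a symplectic step and checks that the trajectory stays on the ellipse above:

```python
import numpy as np

# Evolve the SHO Hamiltonian system and check that the phase space
# trajectory stays on the ellipse E = p^2/2m + (1/2) m w^2 q^2.
m, w = 1.3, 2.1          # mass and angular frequency (arbitrary)
q, p = 0.7, -0.4         # initial conditions (arbitrary)
E0 = p**2 / (2 * m) + 0.5 * m * w**2 * q**2

dt = 1e-4
for _ in range(200_000):
    # leapfrog (kick-drift-kick) update of qdot = p/m, pdot = -m w^2 q
    p -= 0.5 * dt * m * w**2 * q
    q += dt * p / m
    p -= 0.5 * dt * m * w**2 * q

E = p**2 / (2 * m) + 0.5 * m * w**2 * q**2
# standard elliptical form: 1 = (p/sqrt(2 m E))^2 + (sqrt(m/(2E)) w q)^2
ellipse = (p / np.sqrt(2 * m * E0))**2 + (np.sqrt(m / (2 * E0)) * w * q)**2
print(abs(E - E0), abs(ellipse - 1.0))  # both ~ 0
```

The leapfrog step is used because it respects the symplectic structure of phase space, so the energy error stays bounded instead of drifting.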

## Applications of $H$.

\begin{itemize}
\item Classical stat mech.
\item transition into QM via Poisson brackets.
\item mathematical theorems about phase space “flow”.
\item perturbation theory.
\end{itemize}

## Poisson brackets.

The Poisson bracket arises very naturally if one asks about the time evolution of a function $f(p, q, t)$ on phase space.

\begin{aligned}\frac{d}{dt} f(p_i, q_i, t)&=\sum_i \frac{\partial {f}}{\partial {p_i}} \frac{d{p_i}}{dt}+ \frac{\partial {f}}{\partial {q_i}} \frac{d{q_i}}{dt}+ \frac{\partial {f}}{\partial {t}} \\ &=\sum_i- \frac{\partial {f}}{\partial {p_i}} \frac{\partial {H}}{\partial {q_i}}+ \frac{\partial {f}}{\partial {q_i}} \frac{\partial {H}}{\partial {p_i}}+ \frac{\partial {f}}{\partial {t}},\end{aligned}

where Hamilton's equations $\dot{q}_i = {\partial {H}}/{\partial {p_i}}$, $\dot{p}_i = -{\partial {H}}/{\partial {q_i}}$ have been used in the second step.

Define the commutator of $H$ and $f$ as

\begin{aligned}\left[{H},{f}\right] =\sum_i\frac{\partial {H}}{\partial {p_i}}\frac{\partial {f}}{\partial {q_i}}-\frac{\partial {H}}{\partial {q_i}}\frac{\partial {f}}{\partial {p_i}}\end{aligned} \hspace{\stretch{1}}(1.4)

This is the Poisson bracket of $H(p,q,t)$ with $f(p,q,t)$, defined for arbitrary functions on phase space.

Note that other sign conventions exist (apparently Landau and Lifshitz use the opposite).

So we have

\begin{aligned}\frac{d{{}}}{dt} f(p_i, q_i, t) = \left[{H},{f}\right] + \frac{\partial {f}}{\partial {t}}.\end{aligned} \hspace{\stretch{1}}(1.5)
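This bracket is easy to experiment with symbolically. A sketch using sympy, with the sign convention above and the SHO Hamiltonian of 1.1 as the example:

```python
import sympy as sp

# Poisson bracket in this post's convention:
# [H, f] = sum_i dH/dp_i df/dq_i - dH/dq_i df/dp_i
q, p, m, w = sp.symbols('q p m omega')

def bracket(H, f, coords):
    """[H, f] for coords = [(q1, p1), (q2, p2), ...]."""
    return sum(sp.diff(H, pi) * sp.diff(f, qi) - sp.diff(H, qi) * sp.diff(f, pi)
               for qi, pi in coords)

H = p**2 / (2 * m) + sp.Rational(1, 2) * m * w**2 * q**2
print(bracket(H, q, [(q, p)]))   # p/m       = qdot
print(bracket(H, p, [(q, p)]))   # -m w^2 q  = pdot
```

For $f = q$ and $f = p$ (no explicit time dependence) this reproduces Hamilton's equations, as the general formula says it should.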

Corollaries:

If $f$ has no explicit time dependence ${\partial {f}}/{\partial {t}} = 0$ and if $\left[{H},{f}\right] = 0$, then $f$ is an integral of motion.

In QM conserved quantities are the ones that commute with the Hamiltonian operator.

To see the analogy better, recall the definition of the Poisson bracket

\begin{aligned}\left[{f},{g}\right] =\sum_i\frac{\partial {f}}{\partial {p_i}}\frac{\partial {g}}{\partial {q_i}}-\frac{\partial {f}}{\partial {q_i}}\frac{\partial {g}}{\partial {p_i}}\end{aligned} \hspace{\stretch{1}}(1.6)

Properties of Poisson bracket

\begin{itemize}
\item antisymmetric

\begin{aligned}\left[{f},{g}\right] = -\left[{g},{f}\right].\end{aligned} \hspace{\stretch{1}}(1.7)

\item linear

\begin{aligned}\left[{a f + b h},{g}\right] &= a \left[{f},{g}\right] + b\left[{h},{g}\right] \\ \left[{g},{a f + b h}\right] &= a \left[{g},{f}\right] + b\left[{g},{h}\right].\end{aligned} \hspace{\stretch{1}}(1.8)

\end{itemize}

### Example. Compute $p$, $q$ commutators.

\begin{aligned}\left[{p_i},{p_j}\right] &= \sum_k\frac{\partial {p_i}}{\partial {p_k}}\frac{\partial {p_j}}{\partial {q_k}}-\frac{\partial {p_i}}{\partial {q_k}}\frac{\partial {p_j}}{\partial {p_k}} \\ &= 0,\end{aligned}

since ${\partial {p}}/{\partial {q}} = 0$ (the $p$'s and $q$'s are independent coordinates on phase space).

So

\begin{aligned}\left[{p_i},{p_j}\right] = 0\end{aligned} \hspace{\stretch{1}}(1.10)

Similarly $\left[{q_i},{q_j}\right] = 0$.

\begin{aligned}\left[{q_i},{p_j}\right] &= \sum_k\frac{\partial {q_i}}{\partial {p_k}}\frac{\partial {p_j}}{\partial {q_k}}-\frac{\partial {q_i}}{\partial {q_k}}\frac{\partial {p_j}}{\partial {p_k}} \\ &= -\sum_k\delta_{ik}\delta_{jk} \\ &=-\delta_{ij},\end{aligned}

where the first product vanishes since ${\partial {q}}/{\partial {p}} = 0$.

So

\begin{aligned}\left[{q_i},{p_j}\right] = -\delta_{ij}.\end{aligned} \hspace{\stretch{1}}(1.11)
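As a quick cross-check of these canonical bracket values, a sketch using sympy for a two degree of freedom system (the symbol names are arbitrary):

```python
import sympy as sp

# Check [q_i, p_j] = -delta_ij, [q_i, q_j] = [p_i, p_j] = 0
# in this post's bracket convention.
q1, q2, p1, p2 = sp.symbols('q1 q2 p1 p2')
coords = [(q1, p1), (q2, p2)]

def bracket(f, g):
    return sum(sp.diff(f, pi) * sp.diff(g, qi) - sp.diff(f, qi) * sp.diff(g, pi)
               for qi, pi in coords)

print(bracket(q1, p1), bracket(q1, p2), bracket(p1, p2), bracket(q1, q2))
# -1, 0, 0, 0; antisymmetry gives bracket(p1, q1) = +1
```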

This provides a systematic (axiomatic) way to "quantize" a classical mechanical system, where we make the replacements

\begin{aligned}q_i &\rightarrow \hat{q}_i \\ p_i &\rightarrow \hat{p}_i,\end{aligned} \hspace{\stretch{1}}(1.12)

and

\begin{aligned}\left[{q_i},{p_j}\right] = -\delta_{ij} &\rightarrow \left[{\hat{q}_i},{\hat{p}_j}\right] = i \hbar \delta_{ij} \\ H(p, q, t) &\rightarrow \hat{H}(\hat{p}, \hat{q}, t).\end{aligned} \hspace{\stretch{1}}(1.14)

So

\begin{aligned}\frac{\left[{\hat{q}_i},{\hat{p}_j}\right]}{-i \hbar } = - \delta_{ij}\end{aligned} \hspace{\stretch{1}}(1.16)

Our quantization of time evolution is therefore

\begin{aligned}\frac{d{{}}}{dt} \hat{q}_i &= \frac{1}{{-i\hbar}} \left[{\hat{H}},{\hat{q}_i}\right] \\ \frac{d{{}}}{dt} \hat{p}_i &= \frac{1}{{-i\hbar}} \left[{\hat{H}},{\hat{p}_i}\right].\end{aligned} \hspace{\stretch{1}}(1.17)

These are the Heisenberg equations of motion in QM.

### Conserved quantities.

For conserved quantities $f$, functions of the $p$'s and $q$'s, we have

\begin{aligned}\left[{f},{H}\right] = 0\end{aligned} \hspace{\stretch{1}}(1.19)

Considering the components $M_i$, where

\begin{aligned}\mathbf{M} = \mathbf{r} \times \mathbf{p},\end{aligned} \hspace{\stretch{1}}(1.20)

We can show (see (3.27) in Appendix I) that our Poisson brackets obey

\begin{aligned}\left[{M_x},{M_y}\right] &= -M_z \\ \left[{M_y},{M_z}\right] &= -M_x \\ \left[{M_z},{M_x}\right] &= -M_y\end{aligned} \hspace{\stretch{1}}(1.21)

(Prof Poppitz wasn’t sure of the sign of this and the particular bracket sign convention he happened to be using, but it appears he had it right).

These are the analogues, right here in classical mechanics, of the angular momentum commutator relationships from QM.

Considering the symmetries that lead to such conservation relationships, it is actually possible to show, for the Kepler problem, that a hidden symmetry under rotations in 4D space is responsible for the conservation of the Runge-Lenz vector.

# Adiabatic changes in phase space and conserved quantities.

Consider a variable length pendulum as in figure (\ref{fig:phaseSpaceAndTrajectories:phaseSpaceAndTrajectoriesFig2}), where we have

\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.2\textheight]{phaseSpaceAndTrajectoriesFig2}
\caption{Variable length pendulum.}
\end{figure}

\begin{aligned}T = \frac{2 \pi}{\omega(t)} = 2 \pi \sqrt{\frac{l(t)}{g}}.\end{aligned} \hspace{\stretch{1}}(2.24)

Imagine that we change the length $l(t)$ very slowly so that

\begin{aligned}T \frac{1}{{l}} \frac{d{{l}}}{dt} \ll 1\end{aligned} \hspace{\stretch{1}}(2.25)

where $T$ is the period of oscillation. This is what’s called an adiabatic change, where the change of $\omega$ is small over a period.

It turns out that if this rate of change is slow, then there is actually an invariant, and

\begin{aligned}\frac{E}{\omega},\end{aligned} \hspace{\stretch{1}}(2.26)

is the so-called "adiabatic invariant". There's an important application of this (and some relations to QM). Imagine that we have a particle confined between two walls, where the walls are moved very slowly as in figure (\ref{fig:phaseSpaceAndTrajectories:phaseSpaceAndTrajectoriesFig3})
\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.2\textheight]{phaseSpaceAndTrajectoriesFig3}
\caption{Particle constrained by slowly moving walls.}
\end{figure}

This can be used to derive the adiabatic equation for an ideal gas (also using the equipartition theorem).
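The adiabatic invariance of $E/\omega$ is easy to see numerically. A sketch: a unit mass oscillator whose frequency is slowly ramped from 1 to 2 (the ramp time and step size are arbitrary illustrative choices):

```python
import numpy as np

# Integrate xddot = -w(t)^2 x with a slowly ramped frequency (m = 1) and
# watch E/w stay nearly constant while E itself changes substantially.
T_RAMP = 500.0

def omega(t):
    return 1.0 + t / T_RAMP      # slow linear ramp from 1 to 2

dt = 1e-3
t, x, v = 0.0, 1.0, 0.0
E0 = 0.5 * v**2 + 0.5 * omega(0.0)**2 * x**2
I0 = E0 / omega(0.0)

for _ in range(int(T_RAMP / dt)):
    # leapfrog with the time-dependent frequency
    v -= 0.5 * dt * omega(t)**2 * x
    x += dt * v
    t += dt
    v -= 0.5 * dt * omega(t)**2 * x

E = 0.5 * v**2 + 0.5 * omega(t)**2 * x**2
I = E / omega(t)
print(E / E0)   # roughly 2: energy is pumped into the oscillator
print(I / I0)   # roughly 1: E/omega is (approximately) invariant
```

The energy roughly doubles as $\omega$ doubles, while $E/\omega$ changes only at the order of the adiabaticity parameter $\dot{\omega}/\omega^2$.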

# Appendix I. Poisson brackets of angular momentum.

Let's verify the angular momentum relations of (1.21) above (summation over repeated indices implied):

\begin{aligned}\left[{M_i},{M_j}\right] &=\frac{\partial {M_i}}{\partial {p_k}}\frac{\partial {M_j}}{\partial {x_k}}-\frac{\partial {M_i}}{\partial {x_k}}\frac{\partial {M_j}}{\partial {p_k}} \\ &=\epsilon_{a b i}\epsilon_{r s j}\frac{\partial {x_a p_b}}{\partial {p_k}}\frac{\partial {x_r p_s}}{\partial {x_k}}-\epsilon_{a b i} \epsilon_{r s j}\frac{\partial {x_a p_b}}{\partial {x_k}}\frac{\partial {x_r p_s}}{\partial {p_k}} \\ &=\epsilon_{a b i}\epsilon_{r s j}x_a \frac{\partial {p_b}}{\partial {p_k}}p_s \frac{\partial {x_r}}{\partial {x_k}}-\epsilon_{a b i} \epsilon_{r s j}p_b \frac{\partial {x_a}}{\partial {x_k}}x_r \frac{\partial {p_s}}{\partial {p_k}} \\ &=\epsilon_{a b i}\epsilon_{r s j}x_a \delta_{k b}p_s \delta_{k r}-\epsilon_{a b i} \epsilon_{r s j}p_b \delta_{k a}x_r \delta_{s k} \\ &=\epsilon_{a b i}\epsilon_{r s j}x_a p_s \delta_{b r}-\epsilon_{a b i} \epsilon_{r s j}p_b x_r \delta_{a s} \\ &=\epsilon_{a r i}\epsilon_{r s j}x_a p_s -\epsilon_{s b i} \epsilon_{r s j}p_b x_r \\ &=-\left(\delta_{a s}\delta_{i j}-\delta_{a j}\delta_{i s}\right)x_a p_s -\left(\delta_{b j}\delta_{i r}-\delta_{b r}\delta_{i j}\right)p_b x_r \\ &=-\delta_{a s}\delta_{i j}x_a p_s +\delta_{a j}\delta_{i s}x_a p_s -\delta_{b j}\delta_{i r}p_b x_r +\delta_{b r}\delta_{i j}p_b x_r \\ &=-x_s p_s \delta_{i j}+x_j p_i -p_j x_i +p_b x_b \delta_{i j} \\ &=x_j p_i - x_i p_j,\end{aligned}

since the $\delta_{ij}$ terms cancel.

So, as claimed, for $i, j, k$ in cyclic order we have

\begin{aligned}\left[{M_i},{M_j}\right] = -M_k.\end{aligned} \hspace{\stretch{1}}(3.27)
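This can also be verified directly with sympy, using the bracket convention of this post (a sketch):

```python
import sympy as sp

# Verify [M_x, M_y] = -M_z (and cyclic) for M = r x p in the convention
# [f, g] = sum_k df/dp_k dg/dx_k - df/dx_k dg/dp_k.
x, y, z, px, py, pz = sp.symbols('x y z p_x p_y p_z')
coords = [(x, px), (y, py), (z, pz)]

def bracket(f, g):
    return sum(sp.diff(f, pk) * sp.diff(g, xk) - sp.diff(f, xk) * sp.diff(g, pk)
               for xk, pk in coords)

Mx, My, Mz = y*pz - z*py, z*px - x*pz, x*py - y*px
print(sp.expand(bracket(Mx, My)))   # -M_z = y*p_x - x*p_y
```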

# Appendix II. EOM for the variable length pendulum.

Since we've referred to a variable length pendulum above, let's recall what form the EOM for this system take. With cylindrical coordinates as in figure (\ref{fig:phaseSpaceAndTrajectories:phaseSpaceAndTrajectoriesFig4}), and a spring constant $k$, writing $\omega_0^2 = k/m$, our Lagrangian is

\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.2\textheight]{phaseSpaceAndTrajectoriesFig4}
\caption{Coordinates for the variable length pendulum.}
\end{figure}

\begin{aligned}\mathcal{L} = \frac{1}{{2}} m \left( \dot{r}^2 + r^2 \dot{\theta}^2 \right) - \frac{1}{{2}} m \omega_0^2 r^2 - m g r( 1 - \cos\theta)\end{aligned} \hspace{\stretch{1}}(4.28)

The EOM follows immediately

\begin{aligned}P_\theta &= \frac{\partial {\mathcal{L}}}{\partial {\dot{\theta}}} = m r^2 \dot{\theta} \\ P_r &= \frac{\partial {\mathcal{L}}}{\partial {\dot{r}}} = m \dot{r} \\ \frac{d{{P_\theta}}}{dt} &= \frac{\partial {\mathcal{L}}}{\partial {\theta}} = -m g r \sin\theta \\ \frac{d{{P_r}}}{dt} &= \frac{\partial {\mathcal{L}}}{\partial {r}} = m r \dot{\theta}^2 - m \omega_0^2 r - m g (1 - \cos\theta)\end{aligned} \hspace{\stretch{1}}(4.29)

Or

\begin{aligned}\frac{d{{}}}{dt} \left( r^2 \dot{\theta} \right) &= - g r \sin\theta \\ \frac{d{{}}}{dt} \left( \dot{r} \right) &= r \left( \dot{\theta}^2 - \omega_0^2 \right) - g (1 - \cos\theta)\end{aligned} \hspace{\stretch{1}}(4.33)

Even in the small angle limit this isn’t a terribly friendly looking system

\begin{aligned}r \ddot{\theta} + 2 \dot{\theta} \dot{r} + g \theta &= 0 \\ \ddot{r} - r \dot{\theta}^2 + r \omega_0^2 &= 0.\end{aligned} \hspace{\stretch{1}}(4.35)

However, in the first equation of this system

\begin{aligned}\ddot{\theta} + 2 \dot{\theta} \frac{\dot{r}}{r} + \frac{g}{r} \theta = 0,\end{aligned} \hspace{\stretch{1}}(4.37)

we do see the $\dot{r}/r$ dependence mentioned in class, and see how this being small will still result in something that approximately has the form of a SHO.

## PHY456H1F: Quantum Mechanics II. Lecture 19 (Taught by Prof J.E. Sipe). Rotations of operators.

Posted by peeterjoot on November 16, 2011


# Disclaimer.

Peeter’s lecture notes from class. May not be entirely coherent.

# Rotations of operators.

Rotating with $U[M]$ as in figure (\ref{fig:qmTwoL19:qmTwoL19fig1})
\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.2\textheight]{qmTwoL19fig1}
\caption{Rotating a state centered at $F$}
\end{figure}

\begin{aligned}\tilde{r}_i = \sum_j M_{ij} \bar{r}_j\end{aligned} \hspace{\stretch{1}}(2.1)

\begin{aligned}{\left\langle {\psi} \right\rvert} R_i {\left\lvert {\psi} \right\rangle} = \bar{r}_i\end{aligned} \hspace{\stretch{1}}(2.2)

\begin{aligned}{\left\langle {\psi} \right\rvert} U^\dagger[M] R_i U[M] {\left\lvert {\psi} \right\rangle}&= \tilde{r}_i = \sum_j M_{ij} \bar{r}_j \\ &={\left\langle {\psi} \right\rvert} \Bigl( \sum_j M_{ij} R_j \Bigr) {\left\lvert {\psi} \right\rangle}\end{aligned}

So

\begin{aligned}U^\dagger[M] R_i U[M] = \sum_j M_{ij} R_j\end{aligned} \hspace{\stretch{1}}(2.3)

Any three operators $V_x, V_y, V_z$ that transform according to

\begin{aligned}U^\dagger[M] V_i U[M] = \sum_j M_{ij} V_j\end{aligned} \hspace{\stretch{1}}(2.4)

form the components of a vector operator.

## Infinitesimal rotations

Consider infinitesimal rotations, where we can show that

\begin{aligned}\left[{V_i},{J_j}\right] = i \hbar \sum_k \epsilon_{ijk} V_k\end{aligned} \hspace{\stretch{1}}(2.5)

Note that for $V_i = J_i$ we recover the familiar commutator rules for angular momentum, but this also holds for operators $\mathbf{R}$, $\mathbf{P}$, $\mathbf{J}$, …
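For the spin one-half case this can be checked numerically with $J_i = \sigma_i/2$ (a sketch, in units with $\hbar = 1$):

```python
import numpy as np

# Check [V_i, J_j] = i hbar eps_ijk V_k for V_i = J_i = sigma_i / 2 (hbar = 1).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
J = [s / 2 for s in (sx, sy, sz)]

def eps(i, j, k):
    """Levi-Civita symbol for indices 0, 1, 2."""
    return (i - j) * (j - k) * (k - i) / 2

ok = True
for i in range(3):
    for j in range(3):
        comm = J[i] @ J[j] - J[j] @ J[i]
        rhs = sum(1j * eps(i, j, k) * J[k] for k in range(3))
        ok &= np.allclose(comm, rhs)
print(ok)  # True
```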

Note that

\begin{aligned}U^\dagger[M] = U[M^{-1}] = U[M^\text{T}],\end{aligned} \hspace{\stretch{1}}(2.6)

so

\begin{aligned}U[M] V_i U^\dagger[M] = U^\dagger[M^{-1}] V_i U[M^{-1}] = \sum_j (M^{-1})_{ij} V_j = \sum_j M_{ji} V_j\end{aligned} \hspace{\stretch{1}}(2.7)

so

\begin{aligned}{\left\langle {\psi} \right\rvert} V_i {\left\lvert {\psi} \right\rangle}={\left\langle {\psi} \right\rvert}U^\dagger[M] \Bigl( U[M] V_i U^\dagger[M] \Bigr) U[M]{\left\lvert {\psi} \right\rangle}\end{aligned} \hspace{\stretch{1}}(2.8)

In the same way, suppose we have nine operators

\begin{aligned}\tau_{ij}, \qquad i, j = x, y, z\end{aligned} \hspace{\stretch{1}}(2.9)

that transform according to

\begin{aligned}U[M] \tau_{ij} U^\dagger[M] = \sum_{lm} M_{li} M_{mj} \tau_{lm}\end{aligned} \hspace{\stretch{1}}(2.10)

then we will call these the components of a (Cartesian) second rank tensor operator. Suppose that we have an operator $S$ that transforms as

\begin{aligned}U[M] S U^\dagger[M] = S\end{aligned} \hspace{\stretch{1}}(2.11)

Then we will call $S$ a scalar operator.

## A problem.

This all looks good, but it is really not satisfactory. There is a problem.

Suppose that we have a Cartesian tensor operator like this; let's look at the quantity

\begin{aligned}\sum_i \tau_{ii}&=\sum_iU[M] \tau_{ii} U^\dagger[M] \\ &= \sum_i\sum_{lm} M_{li} M_{mi} \tau_{lm} \\ &= \sum_i\sum_{lm} M_{li} M_{im}^\text{T} \tau_{lm} \\ &= \sum_{lm} \delta_{lm} \tau_{lm} \\ &= \sum_{l} \tau_{ll} \end{aligned}

We see that some simplicity (in this case trace invariance) is buried inside these Cartesian tensors of higher rank. Who knows what other relationships are also there? We want to extract and work with these buried simplicities, and we will find that the Cartesian way of expressing these tensors is horribly inefficient. What is a representation that doesn't have any excess information, and is in some sense minimal?
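The classical analogue of this trace computation is easy to check numerically. A sketch (the tensor and rotation are arbitrary choices): under $\tau'_{ij} = \sum_{lm} M_{li} M_{mj} \tau_{lm} = (M^\text{T} \tau M)_{ij}$ the trace is invariant.

```python
import numpy as np

# Trace invariance of a rank two Cartesian tensor under rotation:
# tr(M^T tau M) = tr(tau M M^T) = tr(tau) since M is orthogonal.
rng = np.random.default_rng(0)
tau = rng.normal(size=(3, 3))        # arbitrary rank two tensor

theta = 0.7                          # arbitrary rotation about z
M = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])

tau_rot = M.T @ tau @ M
print(np.trace(tau_rot) - np.trace(tau))  # ~ 0
```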

## How do we extract these buried simplicities?

Recall

\begin{aligned}U[M] {\left\lvert {j m''} \right\rangle} \end{aligned} \hspace{\stretch{1}}(2.12)

gives a linear combination of the ${\left\lvert {j m'} \right\rangle}$.

\begin{aligned}U[M] {\left\lvert {j m''} \right\rangle} &=\sum_{m'} {\left\lvert {j m'} \right\rangle} {\left\langle {j m'} \right\rvert} U[M] {\left\lvert {j m''} \right\rangle} \\ &=\sum_{m'} {\left\lvert {j m'} \right\rangle} D^{(j)}_{m' m''}[M] \\ \end{aligned}

We've talked before about how these $D^{(j)}_{m' m''}[M]$ form a representation of the rotation group. These are in fact (not proved here) an irreducible representation.

Look at each element of $D^{(j)}_{m' m''}[M]$. These are matrices and will differ according to which rotation $M$ is chosen. For any given element there is some $M$ for which it is nonzero; no element is zero for all possible $M$. There are more formal ways to establish this in a group theory context, but this is a physical way to think about it.

Think of these as the basis vectors for some eigenket of $J^2$.

\begin{aligned}{\left\lvert {\psi} \right\rangle} &= \sum_{m''} {\left\lvert {j m''} \right\rangle} \left\langle{{j m''}} \vert {{\psi}}\right\rangle \\ &= \sum_{m''} \bar{a}_{m''} {\left\lvert {j m''} \right\rangle}\end{aligned}

where

\begin{aligned}\bar{a}_{m''} = \left\langle{{j m''}} \vert {{\psi}}\right\rangle \end{aligned} \hspace{\stretch{1}}(2.13)

So

\begin{aligned}U[M] {\left\lvert {\psi} \right\rangle} &= \sum_{m'} U[M] {\left\lvert {j m'} \right\rangle} \left\langle{{j m'}} \vert {{\psi}}\right\rangle \\ &= \sum_{m'} U[M] {\left\lvert {j m'} \right\rangle} \bar{a}_{m'} \\ &= \sum_{m', m''} {\left\lvert {j m''} \right\rangle} {\left\langle {j m''} \right\rvert}U[M] {\left\lvert {j m'} \right\rangle} \bar{a}_{m'} \\ &= \sum_{m', m''} {\left\lvert {j m''} \right\rangle} D^{(j)}_{m'', m'}\bar{a}_{m'} \\ &= \sum_{m''} \tilde{a}_{m''} {\left\lvert {j m''} \right\rangle} \end{aligned}

where

\begin{aligned}\tilde{a}_{m''} = \sum_{m'} D^{(j)}_{m'', m'} \bar{a}_{m'} \\ \end{aligned} \hspace{\stretch{1}}(2.14)

Recall that

\begin{aligned}\tilde{r}_i = \sum_j M_{ij} \bar{r}_j\end{aligned} \hspace{\stretch{1}}(2.15)

Define $(2k + 1)$ operators ${T_k}^q$, $q = k, k-1, \cdots -k$ as the elements of a spherical tensor of rank $k$ if

\begin{aligned}U[M] {T_k}^q U^\dagger[M] = \sum_{q'} D^{(k)}_{q' q} {T_k}^{q'}\end{aligned} \hspace{\stretch{1}}(2.16)

Here we are looking for a better way to organize things, and it will turn out (not to be proved) that this will be an irreducible way to represent things.

## Examples.

We want to work though some examples of spherical tensors, and how they relate to Cartesian tensors. To do this, a motivating story needs to be told.

Let’s suppose that ${\left\lvert {\psi} \right\rangle}$ is a ket for a single particle. Perhaps we are talking about an electron without spin, and write

\begin{aligned}\left\langle{\mathbf{r}} \vert {{\psi}}\right\rangle &= Y_{lm}(\theta, \phi) f(r) \\ &= \sum_{m''} \bar{a}_{m''} Y_{l m''}(\theta, \phi) \end{aligned}

for $\bar{a}_{m''} = \delta_{m'' m}$ and after dropping $f(r)$. So

\begin{aligned}{\left\langle {\mathbf{r}} \right\rvert} U[M] {\left\lvert {\psi} \right\rangle} =\sum_{m''} \sum_{m'} D^{(l)}_{m'' m'} \bar{a}_{m'} Y_{l m''}(\theta, \phi) \end{aligned} \hspace{\stretch{1}}(2.17)

We are writing this in this particular way to make a point. Now also assume that

\begin{aligned}\left\langle{\mathbf{r}} \vert {{\psi}}\right\rangle = Y_{lm}(\theta, \phi)\end{aligned} \hspace{\stretch{1}}(2.18)

so we find

\begin{aligned}{\left\langle {\mathbf{r}} \right\rvert} U[M] {\left\lvert {\psi} \right\rangle} &=\sum_{m''} Y_{l m''}(\theta, \phi) D^{(l)}_{m'' m} \\ &=Y'_{l m}(\theta, \phi) \end{aligned}

Viewing the spherical harmonic as a function of the Cartesian coordinates,

\begin{aligned}Y_{l m}(\theta, \phi) = Y_{lm}(x, y, z)\end{aligned} \hspace{\stretch{1}}(2.19)

so

\begin{aligned}Y'_{l m}(x, y, z)= \sum_{m''} Y_{l m''}(x, y, z)D^{(l)}_{m'' m} \end{aligned} \hspace{\stretch{1}}(2.20)

Now consider the spherical harmonic as an operator $Y_{l m}(X, Y, Z)$

\begin{aligned}U[M] Y_{lm}(X, Y, Z) U^\dagger[M] =\sum_{m''} Y_{l m''}(X, Y, Z)D^{(l)}_{m'' m} \end{aligned} \hspace{\stretch{1}}(2.21)

So this is a way to generate spherical tensor operators of rank $0, 1, 2, \cdots$.

# References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.

## PHY456H1F: Quantum Mechanics II. Lecture 1 (Taught by Prof J.E. Sipe). Review: Composite systems

Posted by peeterjoot on September 15, 2011


# Composite systems.

This is apparently covered as a side effect in the text [1] in one of the advanced material sections. FIXME: what section?

Example: one spin one-half particle and one spin one particle. We can describe each quantum mechanically with its own Hilbert space, giving a pair of Hilbert spaces

\begin{aligned}H_1,\end{aligned} \hspace{\stretch{1}}(1.1)

of dimension $D_1$

\begin{aligned}H_2,\end{aligned} \hspace{\stretch{1}}(1.2)

of dimension $D_2$

Recall that a Hilbert space (finite or infinite dimensional) is the set of states that describe the system. There are some additional details (completeness, normalizability, $L^2$ integrability, …) not really covered in the physics curriculum, but available in mathematical treatments.

We form the composite (Hilbert) space

\begin{aligned}H = H_1 \otimes H_2\end{aligned} \hspace{\stretch{1}}(1.3)

\begin{aligned}H_1 : \{ {\lvert {\phi_1^{(i)}} \rangle} \}\end{aligned} \hspace{\stretch{1}}(1.4)

for any ket in $H_1$

\begin{aligned}{\lvert {I} \rangle} = \sum_{i=1}^{D_1} c_i {\lvert {\phi_1^{(i)}} \rangle} \end{aligned} \hspace{\stretch{1}}(1.5)

where

\begin{aligned}{\langle { \phi_1^{(i)}} \rvert}{\lvert { \phi_1^{(j)}} \rangle} = \delta^{i j}\end{aligned} \hspace{\stretch{1}}(1.6)

Similarly

\begin{aligned}H_2 : \{ {\lvert {\phi_2^{(i)}} \rangle} \}\end{aligned} \hspace{\stretch{1}}(1.7)

for any ket in $H_2$

\begin{aligned}{\lvert {II} \rangle} = \sum_{i=1}^{D_2} d_i {\lvert {\phi_2^{(i)}} \rangle} \end{aligned} \hspace{\stretch{1}}(1.8)

where

\begin{aligned}{\langle { \phi_2^{(i)}} \rvert}{\lvert { \phi_2^{(j)}} \rangle} = \delta^{i j}\end{aligned} \hspace{\stretch{1}}(1.9)

The composite Hilbert space has dimension $D_1 D_2$, with basis kets

\begin{aligned}{\lvert { \phi_1^{(i)}} \rangle} \otimes {\lvert { \phi_2^{(j)}} \rangle} = {\lvert { \phi^{(ij)}} \rangle},\end{aligned} \hspace{\stretch{1}}(1.10)

where

\begin{aligned}{\langle { \phi^{(ij)}} \rvert}{\lvert { \phi^{(kl)}} \rangle} = \delta^{ik} \delta^{jl}.\end{aligned} \hspace{\stretch{1}}(1.11)

Any ket in $H$ can be written

\begin{aligned}{\lvert {\psi} \rangle} &= \sum_{i = 1}^{D_1}\sum_{j = 1}^{D_2}f_{ij}{\lvert { \phi_1^{(i)}} \rangle} \otimes {\lvert { \phi_2^{(j)}} \rangle} \\ &= \sum_{i = 1}^{D_1}\sum_{j = 1}^{D_2}f_{ij}{\lvert { \phi^{(ij)}} \rangle}.\end{aligned}

Direct product of kets:

\begin{aligned}{\lvert {I} \rangle} \otimes {\lvert {II} \rangle} &\equiv\sum_{i = 1}^{D_1}\sum_{j = 1}^{D_2}c_i d_j{\lvert { \phi_1^{(i)}} \rangle} \otimes {\lvert { \phi_2^{(j)}} \rangle} \\ &=\sum_{i = 1}^{D_1}\sum_{j = 1}^{D_2}c_i d_j{\lvert { \phi^{(ij)}} \rangle} \end{aligned}

If ${\lvert {\psi} \rangle}$ in $H$ cannot be written as ${\lvert {I} \rangle} \otimes {\lvert {II} \rangle}$, then ${\lvert {\psi} \rangle}$ is said to be “entangled”.

FIXME: insert a concrete example of this, with some low dimension.
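A concrete low-dimensional sketch (using numpy; the Schmidt-rank criterion and the particular kets are illustrative choices, not from the lecture): in $H = \mathbb{C}^2 \otimes \mathbb{C}^2$, a ket with coefficients $f_{ij}$ is a product ket ${\lvert {I} \rangle} \otimes {\lvert {II} \rangle}$ exactly when the matrix $f = c d^\text{T}$ has rank 1; rank 2 means entangled.

```python
import numpy as np

# Entanglement test in C^2 (x) C^2: the Schmidt rank of the coefficient
# matrix f_ij is 1 for product kets, > 1 for entangled kets.
def schmidt_rank(f):
    return np.linalg.matrix_rank(f)

c = np.array([1.0, 2.0])
d = np.array([3.0, -1.0])
product = np.outer(c, d)                                  # |I> (x) |II>
bell = np.array([[1.0, 0.0], [0.0, 1.0]]) / np.sqrt(2)    # (|00> + |11>)/sqrt(2)

print(schmidt_rank(product))  # 1 -> unentangled
print(schmidt_rank(bell))     # 2 -> entangled
```

The Bell-like ket here cannot be factored: any product $c_i d_j$ has a rank 1 coefficient matrix, while this one is proportional to the identity.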

## Operators.

Given operators $\mathcal{O}_1$ and $\mathcal{O}_2$ on the respective Hilbert spaces, we'd now like to build

\begin{aligned}\mathcal{O}_1 \otimes \mathcal{O}_2\end{aligned} \hspace{\stretch{1}}(1.12)

One defines the action of this product operator on a ket ${\lvert {\psi} \rangle} = \sum_{ij} f_{ij} {\lvert { \phi_1^{(i)}} \rangle} \otimes {\lvert { \phi_2^{(j)}} \rangle}$ by

\begin{aligned}(\mathcal{O}_1 \otimes \mathcal{O}_2) {\lvert {\psi} \rangle}\equiv\sum_{i = 1}^{D_1}\sum_{j = 1}^{D_2}f_{ij}{\lvert { \mathcal{O}_1 \phi_1^{(i)}} \rangle} \otimes {\lvert { \mathcal{O}_2 \phi_2^{(j)}} \rangle} \end{aligned} \hspace{\stretch{1}}(1.13)

Q: Can every operator that can be defined on the composite space be represented in this form?

No.

Special cases. The identity operators. Suppose that

\begin{aligned}{\lvert {\psi} \rangle}=\sum_{i = 1}^{D_1}\sum_{j = 1}^{D_2}f_{ij}{\lvert { \phi_1^{(i)}} \rangle} \otimes {\lvert { \phi_2^{(j)}} \rangle} \end{aligned} \hspace{\stretch{1}}(1.14)

then

\begin{aligned}(\mathcal{O}_1 \otimes \mathcal{I}_2) {\lvert {\psi} \rangle}=\sum_{i = 1}^{D_1}\sum_{j = 1}^{D_2}f_{ij}{\lvert { \mathcal{O}_1 \phi_1^{(i)}} \rangle} \otimes {\lvert { \phi_2^{(j)}} \rangle} \end{aligned} \hspace{\stretch{1}}(1.15)

### Example commutator.

Can do other operations. Example:

\begin{aligned}\left[{ \mathcal{O}_1 \otimes \mathcal{I}_2 },{ \mathcal{I}_1 \otimes \mathcal{O}_2 }\right] = 0\end{aligned} \hspace{\stretch{1}}(1.16)

Let’s verify this one. Suppose that our state has the representation

\begin{aligned}{\lvert {\psi} \rangle} = \sum_{i = 1}^{D_1}\sum_{j = 1}^{D_2}f_{ij}{\lvert { \phi_1^{(i)}} \rangle} \otimes {\lvert { \phi_2^{(j)}} \rangle}\end{aligned} \hspace{\stretch{1}}(1.17)

so that the action on this ket from the composite operations are

\begin{aligned}(\mathcal{O}_1 \otimes \mathcal{I}_2){\lvert {\psi} \rangle} &= \sum_{i = 1}^{D_1}\sum_{j = 1}^{D_2}f_{ij}{\lvert { \mathcal{O}_1 \phi_1^{(i)}} \rangle} \otimes {\lvert { \phi_2^{(j)}} \rangle} \\ (\mathcal{I}_1 \otimes \mathcal{O}_2){\lvert {\psi} \rangle} &= \sum_{i = 1}^{D_1}\sum_{j = 1}^{D_2}f_{ij}{\lvert { \phi_1^{(i)}} \rangle} \otimes {\lvert { \mathcal{O}_2 \phi_2^{(j)}} \rangle}\end{aligned} \hspace{\stretch{1}}(1.18)

Our commutator is

\begin{aligned}\left[{(\mathcal{O}_1 \otimes \mathcal{I}_2)},{(\mathcal{I}_1 \otimes \mathcal{O}_2)}\right]{\lvert {\psi} \rangle} &=(\mathcal{O}_1 \otimes \mathcal{I}_2)(\mathcal{I}_1 \otimes \mathcal{O}_2) {\lvert {\psi} \rangle} -(\mathcal{I}_1 \otimes \mathcal{O}_2)(\mathcal{O}_1 \otimes \mathcal{I}_2){\lvert {\psi} \rangle} \\ &=(\mathcal{O}_1 \otimes \mathcal{I}_2)\sum_{i = 1}^{D_1}\sum_{j = 1}^{D_2}f_{ij}{\lvert { \phi_1^{(i)}} \rangle} \otimes {\lvert { \mathcal{O}_2 \phi_2^{(j)}} \rangle}-(\mathcal{I}_1 \otimes \mathcal{O}_2)\sum_{i = 1}^{D_1}\sum_{j = 1}^{D_2}f_{ij}{\lvert { \mathcal{O}_1 \phi_1^{(i)}} \rangle} \otimes {\lvert { \phi_2^{(j)}} \rangle} \\ &=\sum_{i = 1}^{D_1}\sum_{j = 1}^{D_2}f_{ij}{\lvert { \mathcal{O}_1 \phi_1^{(i)}} \rangle} \otimes {\lvert { \mathcal{O}_2 \phi_2^{(j)}} \rangle}-\sum_{i = 1}^{D_1}\sum_{j = 1}^{D_2}f_{ij}{\lvert { \mathcal{O}_1 \phi_1^{(i)}} \rangle} \otimes {\lvert { \mathcal{O}_2 \phi_2^{(j)}} \rangle} \\ &=0 \qquad \square\end{aligned}
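This commutator can also be spot-checked numerically (a sketch with numpy; the dimensions and random operators are arbitrary choices):

```python
import numpy as np

# Check that (O1 (x) I2) and (I1 (x) O2) commute, with D1 = 2, D2 = 3.
rng = np.random.default_rng(1)
O1 = rng.normal(size=(2, 2))
O2 = rng.normal(size=(3, 3))

A = np.kron(O1, np.eye(3))    # O1 (x) I2
B = np.kron(np.eye(2), O2)    # I1 (x) O2

print(np.max(np.abs(A @ B - B @ A)))  # 0 up to roundoff
```

Both products equal $\mathcal{O}_1 \otimes \mathcal{O}_2$ in matrix form, which is the content of the calculation above.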

### Generalizations.

Can generalize to

\begin{aligned}H_1 \otimes H_2 \otimes H_3 \otimes \cdots\end{aligned} \hspace{\stretch{1}}(1.20)

Can also start with $H$ and seek factor spaces. If $H$ is not prime there are, in general, many ways to find factor spaces

\begin{aligned}H = H_1 \otimes H_2 =H_1' \otimes H_2'\end{aligned} \hspace{\stretch{1}}(1.21)

If a ket ${\lvert {\psi} \rangle}$ is unentangled with respect to one factorization, it will in general be entangled with respect to another. Thus entanglement is not a property of the ket itself, but is intrinsically related to the factorization of the space in which it is represented.

# References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.

## Harmonic Oscillator position and momentum Hamiltonian operators

Posted by peeterjoot on December 18, 2010

# Motivation.

Hamiltonian problem from Chapter 9 of [1].

## Problem 1.

### Statement.

Assume $x(t)$ and $p(t)$ to be Heisenberg operators with $x(0) = x_0$ and $p(0) = p_0$. For a Hamiltonian corresponding to the harmonic oscillator show that

\begin{aligned}x(t) &= x_0 \cos \omega t + \frac{p_0}{m \omega} \sin \omega t \\ p(t) &= p_0 \cos \omega t - m \omega x_0 \sin \omega t.\end{aligned} \hspace{\stretch{1}}(3.1)

### Solution.

Recall that the Heisenberg operators were defined by factoring the time evolution out of a set of states

\begin{aligned}{\langle {\alpha(t) } \rvert} A {\lvert { \beta(t) } \rangle}={\langle {\alpha(0) } \rvert} e^{i H t/\hbar} A e^{-i H t/\hbar} {\lvert { \beta(0) } \rangle}.\end{aligned} \hspace{\stretch{1}}(3.3)

So one way to complete the task is to compute these exponential sandwiches. Recall from the appendix of chapter 10, that we have

\begin{aligned}e^A B e^{-A}= B + \left[{A},{B}\right]+ \frac{1}{{2!}} \left[{A},{\left[{A},{B}\right]}\right] + \cdots\end{aligned} \hspace{\stretch{1}}(3.4)

Perhaps there is also some smarter way to do this, but let's first try the obvious way.

Let’s summarize the variables we will work with

\begin{aligned}\alpha &= \sqrt{\frac{m \omega}{\hbar}} \\ X &= \frac{1}{{\alpha \sqrt{2}}} ( a + a^\dagger ) \\ P &= -i \hbar \frac{\alpha}{\sqrt{2}} ( a - a^\dagger ) \\ H &= \hbar \omega ( a^\dagger a + 1/2 ) \\ \left[{a},{a^\dagger}\right] &= 1 \end{aligned} \hspace{\stretch{1}}(3.5)

The operator in the exponential sandwich is

\begin{aligned}A = i H t/\hbar = i \omega t ( a^\dagger a + 1/2 )\end{aligned} \hspace{\stretch{1}}(3.10)

Note that the constant $1/2$ factor will commute with all operators, which reduces the computation required

\begin{aligned}\left[{i H t/\hbar},{B}\right] = (i\omega t) \left[{a^\dagger a},{B}\right]\end{aligned} \hspace{\stretch{1}}(3.11)

For $B = X$, or $B = P$, we’ll want some intermediate results

\begin{aligned}\left[{a^\dagger a},{a}\right]&=a^\dagger a a - a a^\dagger a \\ &=a^\dagger a a - (a^\dagger a + 1) a \\ &=-a,\end{aligned}

and

\begin{aligned}\left[{a^\dagger a},{a^\dagger}\right]&=a^\dagger a a^\dagger - a^\dagger a^\dagger a \\ &=a^\dagger a a^\dagger - a^\dagger (a a^\dagger -1) \\ &=a^\dagger\end{aligned}

Using these we can evaluate the commutators for the position and momentum operators. For position we have

\begin{aligned}\left[{i H t /\hbar },{X}\right] &= (i \omega t) \frac{1}{{\alpha \sqrt{2}}} \left[{a^\dagger a},{a+ a^\dagger}\right] \\ &= (i \omega t) \frac{1}{{\alpha \sqrt{2}}} (-a + a^\dagger ) \\ &= \frac{\omega t}{\hbar \alpha^2} \frac{-i \hbar \alpha}{ \sqrt{2}} (a - a^\dagger ).\end{aligned}

Since $\alpha^2 \hbar = m \omega$, we have

\begin{aligned}\left[{i H t /\hbar },{X}\right] = (\omega t) \frac{P}{m \omega }.\end{aligned} \hspace{\stretch{1}}(3.12)

For the momentum operator we have

\begin{aligned}\left[{i H t /\hbar },{P}\right] &= (i \omega t) \frac{-i \hbar \alpha}{ \sqrt{2}} \left[{a^\dagger a},{a- a^\dagger}\right] \\ &= (i \omega t) \frac{i \hbar \alpha}{ \sqrt{2}} (a + a^\dagger) \\ &= -(\omega t) (\hbar \alpha^2) X\end{aligned}

So we have

\begin{aligned}\left[{i H t /\hbar },{P}\right] = (-\omega t) (m \omega ) X\end{aligned} \hspace{\stretch{1}}(3.13)

The expansion of the exponential series of nested commutators can now be written down by inspection and we get

\begin{aligned}X_H = X + (\omega t) \frac{P}{m \omega} - \frac{(\omega t)^2}{2!} X - \frac{(\omega t)^3}{3!} \frac{P}{m \omega} + \cdots\end{aligned} \hspace{\stretch{1}}(3.14)

\begin{aligned}P_H = P - (\omega t) (m \omega)X - \frac{(\omega t)^2}{2!} P + \frac{(\omega t)^3}{3!} (m \omega)X + \cdots\end{aligned} \hspace{\stretch{1}}(3.15)

Collection of terms gives us the desired answer

\begin{aligned}X_H = X \cos(\omega t) + \frac{P}{m \omega} \sin(\omega t)\end{aligned} \hspace{\stretch{1}}(3.16)

\begin{aligned}P_H = P \cos(\omega t) - (m \omega) X \sin(\omega t)\end{aligned} \hspace{\stretch{1}}(3.17)
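These results can be checked numerically with truncated oscillator matrices. A sketch (units with $\hbar = 1$; the truncation size $N = 60$ and the parameter values are arbitrary choices, and the truncation is harmless here because $H$ is diagonal in the number basis, so the evolved matrix elements are exact):

```python
import numpy as np

# Heisenberg picture check for the SHO with truncated ladder operators
# (hbar = 1): X_H = X cos(wt) + P/(m w) sin(wt), P_H = P cos(wt) - m w X sin(wt).
N, m, w, t = 60, 1.0, 1.5, 0.8
a = np.diag(np.sqrt(np.arange(1, N)), 1)   # annihilation operator
ad = a.T                                    # creation operator
alpha = np.sqrt(m * w)

X = (a + ad) / (alpha * np.sqrt(2))
P = -1j * alpha / np.sqrt(2) * (a - ad)

# H = w (n + 1/2) is diagonal, so e^{iHt} is a diagonal phase matrix
U = np.diag(np.exp(1j * w * (np.arange(N) + 0.5) * t))

XH = U @ X @ U.conj().T
PH = U @ P @ U.conj().T
errX = np.max(np.abs(XH - (X * np.cos(w * t) + P / (m * w) * np.sin(w * t))))
errP = np.max(np.abs(PH - (P * np.cos(w * t) - m * w * X * np.sin(w * t))))
print(errX, errP)  # both ~ 0
```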

# References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.

## Notes for Desai Chapter 26

Posted by peeterjoot on December 9, 2010

# Motivation.

Chapter 26 notes for [1].

# Guts

## Trig relations.

To verify equations 26.3-5 in the text it’s worth noting that

\begin{aligned}\cos(a + b) &= \Re( e^{ia} e^{ib} ) \\ &= \Re( (\cos a + i \sin a)( \cos b + i \sin b) ) \\ &= \cos a \cos b - \sin a \sin b\end{aligned}

and

\begin{aligned}\sin(a + b) &= \Im( e^{ia} e^{ib} ) \\ &= \Im( (\cos a + i \sin a)( \cos b + i \sin b) ) \\ &= \cos a \sin b + \sin a \cos b\end{aligned}

So, for

\begin{aligned}x &= \rho \cos\alpha \\ y &= \rho \sin\alpha \end{aligned} \hspace{\stretch{1}}(2.1)

the transformed coordinates are

\begin{aligned}x' &= \rho \cos(\alpha + \phi) \\ &= \rho (\cos \alpha \cos \phi - \sin \alpha \sin \phi) \\ &= x \cos \phi - y \sin \phi\end{aligned}

and

\begin{aligned}y' &= \rho \sin(\alpha + \phi) \\ &= \rho (\cos \alpha \sin \phi + \sin \alpha \cos \phi) \\ &= x \sin \phi + y \cos \phi \end{aligned}

This allows us to read off the rotation matrix. Without all the messy trig, we can also derive this matrix with geometric algebra.

\begin{aligned}\mathbf{v}' &= e^{- \mathbf{e}_1 \mathbf{e}_2 \phi/2 } \mathbf{v} e^{ \mathbf{e}_1 \mathbf{e}_2 \phi/2 } \\ &= v_3 \mathbf{e}_3 + (v_1 \mathbf{e}_1 + v_2 \mathbf{e}_2) e^{ \mathbf{e}_1 \mathbf{e}_2 \phi } \\ &= v_3 \mathbf{e}_3 + (v_1 \mathbf{e}_1 + v_2 \mathbf{e}_2) (\cos \phi + \mathbf{e}_1 \mathbf{e}_2 \sin\phi) \\ &= v_3 \mathbf{e}_3 + \mathbf{e}_1 (v_1 \cos\phi - v_2 \sin\phi)+ \mathbf{e}_2 (v_2 \cos\phi + v_1 \sin\phi)\end{aligned}

Here we use the Pauli-matrix like identities

\begin{aligned}\mathbf{e}_k^2 &= 1 \\ \mathbf{e}_i \mathbf{e}_j &= -\mathbf{e}_j \mathbf{e}_i,\quad i\ne j\end{aligned} \hspace{\stretch{1}}(2.3)

and also note that $\mathbf{e}_3$ commutes with the bivector for the $x,y$ plane $\mathbf{e}_1 \mathbf{e}_2$. We can also read off the rotation matrix from this.
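Since the Pauli matrices satisfy exactly the identities 2.3 ($\sigma_k^2 = 1$, anticommutation for distinct indices), they give a faithful matrix representation in which to check this rotor computation numerically. A quick numpy check of mine, not from the text:

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
basis = [s1, s2, s3]

phi = 0.6
v = np.array([1.0, 2.0, 3.0])
V = sum(c * sk for c, sk in zip(v, basis))

# the bivector e1 e2 maps to s1 @ s2, which squares to -1
B = s1 @ s2
R = np.cos(phi / 2) * np.eye(2) - np.sin(phi / 2) * B     # e^{-e1 e2 phi/2}
Rrev = np.cos(phi / 2) * np.eye(2) + np.sin(phi / 2) * B  # e^{+e1 e2 phi/2}

Vp = R @ V @ Rrev

# extract components: v_k' = (1/2) Tr(V' sigma_k)
vp = np.array([0.5 * np.trace(Vp @ sk).real for sk in basis])

expected = np.array([v[0] * np.cos(phi) - v[1] * np.sin(phi),
                     v[1] * np.cos(phi) + v[0] * np.sin(phi),
                     v[2]])
assert np.allclose(vp, expected)
print("rotor rotation matches the trig result")
```

The trace extraction works because $\frac{1}{2}\text{Tr}(\sigma_j \sigma_k) = \delta_{jk}$.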

## Infinitesimal transformations.

Recall that in the problems of Chapter 5, one representation of the spin one matrices was calculated [2]. Since the choice of the basis vectors was arbitrary in that exercise, we ended up with a different representation. For $S_x, S_y, S_z$ as found in (26.20) and (26.23) we can also verify easily that we have eigenvalues $0, \pm \hbar$. We can also show that our spin kets in this non-diagonal representation have the following column matrix representations:

\begin{aligned}{\lvert {1,\pm 1} \rangle}_x &=\frac{1}{{\sqrt{2}}} \begin{bmatrix}0 \\ 1 \\ \pm i\end{bmatrix} \\ {\lvert {1,0} \rangle}_x &=\begin{bmatrix}1 \\ 0 \\ 0 \end{bmatrix} \\ {\lvert {1,\pm 1} \rangle}_y &=\frac{1}{{\sqrt{2}}} \begin{bmatrix}\pm i \\ 0 \\ 1 \end{bmatrix} \\ {\lvert {1,0} \rangle}_y &=\begin{bmatrix}0 \\ 1 \\ 0 \end{bmatrix} \\ {\lvert {1,\pm 1} \rangle}_z &=\frac{1}{{\sqrt{2}}} \begin{bmatrix}1 \\ \pm i \\ 0\end{bmatrix} \\ {\lvert {1,0} \rangle}_z &=\begin{bmatrix}0 \\ 0 \\ 1\end{bmatrix} \end{aligned} \hspace{\stretch{1}}(2.5)

## Verifying the commutator relations.

Given the (summation convention) matrix representation for the spin one operators

\begin{aligned}(S_i)_{jk} = - i \hbar \epsilon_{ijk},\end{aligned} \hspace{\stretch{1}}(2.11)

let’s demonstrate the commutator relation of (26.25).

\begin{aligned}{\left[{S_i},{S_j}\right]}_{rs} &=(S_i S_j - S_j S_i)_{rs} \\ &=\sum_t (S_i)_{rt} (S_j)_{ts} - (S_j)_{rt} (S_i)_{ts} \\ &=(-i\hbar)^2 \sum_t \epsilon_{irt} \epsilon_{jts} - \epsilon_{jrt} \epsilon_{its} \\ &=-(-i\hbar)^2 \sum_t \epsilon_{tir} \epsilon_{tjs} - \epsilon_{tjr} \epsilon_{tis} \\ \end{aligned}

Now we can employ the summation rule for products of antisymmetric tensors summed over one free index (4.179)

\begin{aligned}\sum_i \epsilon_{ijk} \epsilon_{iab}= \delta_{ja}\delta_{kb}-\delta_{jb}\delta_{ka}.\end{aligned} \hspace{\stretch{1}}(2.12)

Continuing we get

\begin{aligned}{\left[{S_i},{S_j}\right]}_{rs} &=-(-i\hbar)^2 \left(\delta_{ij}\delta_{rs}-\delta_{is}\delta_{rj}-\delta_{ji}\delta_{rs}+\delta_{js}\delta_{ri} \right) \\ &=(-i\hbar)^2 \left( \delta_{is}\delta_{jr}-\delta_{ir} \delta_{js}\right)\\ &=(-i\hbar)^2 \sum_t \epsilon_{tij} \epsilon_{tsr}\\ &=i\hbar \sum_t \epsilon_{tij} (S_t)_{rs}\qquad\square\end{aligned}
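Both the contraction identity 2.12 and the resulting commutator relation can be checked by brute force. A numerical verification of my own (with $\hbar = 1$):

```python
import numpy as np

# Levi-Civita symbol on indices 0..2
eps = np.zeros((3, 3, 3))
for i in range(3):
    for j in range(3):
        for k in range(3):
            eps[i, j, k] = (j - i) * (k - i) * (k - j) / 2

delta = np.eye(3)

# the contraction identity (2.12): sum_i eps_ijk eps_iab = d_ja d_kb - d_jb d_ka
lhs = np.einsum('ijk,iab->jkab', eps, eps)
rhs = np.einsum('ja,kb->jkab', delta, delta) - np.einsum('jb,ka->jkab', delta, delta)
assert np.allclose(lhs, rhs)

# spin one matrices (S_i)_jk = -i eps_ijk, with hbar = 1
S = [-1j * eps[i] for i in range(3)]

# eigenvalues of each S_i are 0, +-1, i.e. 0, +-hbar
for Si in S:
    assert np.allclose(sorted(np.linalg.eigvalsh(Si)), [-1, 0, 1])

# the commutator relation [S_i, S_j] = i eps_ijk S_k
for i in range(3):
    for j in range(3):
        comm = S[i] @ S[j] - S[j] @ S[i]
        expected = 1j * sum(eps[i, j, k] * S[k] for k in range(3))
        assert np.allclose(comm, expected)
print("spin one commutators verified")
```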

## General infinitesimal rotation.

Equation (26.26) has for an infinitesimal rotation counterclockwise around the unit axis of rotation vector $\mathbf{n}$

\begin{aligned}\mathbf{V}' = \mathbf{V} + \epsilon \mathbf{n} \times \mathbf{V}.\end{aligned} \hspace{\stretch{1}}(2.13)

Let’s derive this using the geometric algebra rotation expression for the same

\begin{aligned}\mathbf{V}' &=e^{-I\mathbf{n} \alpha/2}\mathbf{V} e^{I\mathbf{n} \alpha/2} \\ &=e^{-I\mathbf{n} \alpha/2}\left((\mathbf{V} \cdot \mathbf{n})\mathbf{n}+(\mathbf{V} \wedge \mathbf{n})\mathbf{n}\right)e^{I\mathbf{n} \alpha/2} \\ &=(\mathbf{V} \cdot \mathbf{n})\mathbf{n}+(\mathbf{V} \wedge \mathbf{n})\mathbf{n} e^{I\mathbf{n} \alpha}\end{aligned}

We note that $I\mathbf{n}$, and thus the exponential, commutes with $\mathbf{n}$ and with the projection component in the normal direction. Similarly, $I\mathbf{n}$ anticommutes with $(\mathbf{V} \wedge \mathbf{n}) \mathbf{n}$. This leaves us with

\begin{aligned}\mathbf{V}' &=(\mathbf{V} \cdot \mathbf{n})\mathbf{n}+(\mathbf{V} \wedge \mathbf{n})\mathbf{n}( \cos \alpha + I \mathbf{n} \sin\alpha)\end{aligned}

For $\alpha = \epsilon \rightarrow 0$, this is

\begin{aligned}\mathbf{V}' &=(\mathbf{V} \cdot \mathbf{n})\mathbf{n}+(\mathbf{V} \wedge \mathbf{n})\mathbf{n}( 1 + I \mathbf{n} \epsilon) \\ &=(\mathbf{V} \cdot \mathbf{n})\mathbf{n} +(\mathbf{V} \wedge \mathbf{n})\mathbf{n}+\epsilon I^2(\mathbf{V} \times \mathbf{n})\mathbf{n}^2 \\ &=\mathbf{V}+ \epsilon (\mathbf{n} \times \mathbf{V}) \qquad\square\end{aligned}
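A numeric spot check of this first order result (my own addition), comparing a full Rodrigues rotation against $\mathbf{V} + \epsilon\, \mathbf{n} \times \mathbf{V}$:

```python
import numpy as np

def rotate(v, n, alpha):
    """Rodrigues rotation of v by angle alpha about the unit axis n."""
    return (v * np.cos(alpha)
            + np.cross(n, v) * np.sin(alpha)
            + n * np.dot(n, v) * (1 - np.cos(alpha)))

n = np.array([1.0, 2.0, 2.0]) / 3.0  # unit axis
v = np.array([0.3, -1.0, 0.7])

eps = 1e-4
exact = rotate(v, n, eps)
first_order = v + eps * np.cross(n, v)

# the error should be second order in eps
assert np.linalg.norm(exact - first_order) < 10 * eps**2
print("infinitesimal rotation approximation verified")
```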

## Position and angular momentum commutator.

Equation (26.71) is

\begin{aligned}\left[{x_i},{L_j}\right] = i \hbar \epsilon_{ijk} x_k.\end{aligned} \hspace{\stretch{1}}(2.14)

Let’s derive this. Recall that we have for the position-momentum commutator

\begin{aligned}\left[{x_i},{p_j}\right] = i \hbar \delta_{ij},\end{aligned} \hspace{\stretch{1}}(2.15)

and for each of the angular momentum operator components we have

\begin{aligned}L_m = \epsilon_{mab} x_a p_b.\end{aligned} \hspace{\stretch{1}}(2.16)

The commutator of interest is thus

\begin{aligned}\left[{x_i},{L_j}\right] &= x_i \epsilon_{jab} x_a p_b -\epsilon_{jab} x_a p_b x_i \\ &= \epsilon_{jab} x_a\left(x_i p_b -p_b x_i \right) \\ &=\epsilon_{jab} x_a i \hbar \delta_{ib} \\ &=i \hbar \epsilon_{jai} x_a \\ &=i \hbar \epsilon_{ija} x_a \qquad\square\end{aligned}

## A note on the angular momentum operator exponential sandwiches.

In (26.73-74) we have

\begin{aligned}e^{i \epsilon L_z/\hbar} x e^{-i \epsilon L_z/\hbar} = x + \frac{i \epsilon}{\hbar} \left[{L_z},{x}\right]\end{aligned} \hspace{\stretch{1}}(2.17)

Observe that

\begin{aligned}\left[{x},{\left[{L_z},{x}\right]}\right] = 0\end{aligned} \hspace{\stretch{1}}(2.18)

so from the first two terms of (10.99)

\begin{aligned}e^{A} B e^{-A}= B + \left[{A},{B}\right]+\frac{1}{{2}} \left[{A},{\left[{A},{B}\right]}\right] + \cdots\end{aligned} \hspace{\stretch{1}}(2.19)

we get the desired result.
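The full nested-commutator expansion (the Hadamard lemma) is easy to validate numerically for generic matrices. This check is my own, not from the text; it sums the series and compares against the exponential sandwich:

```python
import numpy as np

def expm_taylor(M, terms=40):
    """Matrix exponential via Taylor series (adequate for small-norm M)."""
    result = np.eye(M.shape[0], dtype=complex)
    term = np.eye(M.shape[0], dtype=complex)
    for k in range(1, terms):
        term = term @ M / k
        result = result + term
    return result

rng = np.random.default_rng(0)
A = 0.3 * rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

sandwich = expm_taylor(A) @ B @ expm_taylor(-A)

# sum of nested commutators: B + [A,B] + [A,[A,B]]/2! + ...
series = np.zeros_like(B, dtype=complex)
term = B.astype(complex)
for k in range(25):
    series = series + term
    term = (A @ term - term @ A) / (k + 1)

assert np.allclose(sandwich, series)
print("Hadamard lemma verified")
```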

## Trace relation to the determinant.

Going from (26.90) to (26.91) we appear to have a mystery identity

\begin{aligned}\det \left( \mathbf{1} + \mu \mathbf{A} \right) = 1 + \mu \text{Tr} \mathbf{A}\end{aligned} \hspace{\stretch{1}}(2.20)

According to wikipedia, under derivative of a determinant, [3], this approximation is good for small $\mu$, and is related to Jacobi's formula. Someday I should really get around to studying determinants in depth, and will take this one for granted for now.
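For a 2×2 matrix the exact expansion is easy to get with sympy, and shows the identity holding through first order in $\mu$, with corrections entering at $O(\mu^2)$ (my own check, not from the text):

```python
import sympy as sp

mu = sp.symbols('mu')
a, b, c, d = sp.symbols('a b c d')
A = sp.Matrix([[a, b], [c, d]])

lhs = (sp.eye(2) + mu * A).det()
rhs = 1 + mu * A.trace() + mu**2 * A.det()

# det(1 + mu A) = 1 + mu Tr A + mu^2 det A exactly, in the 2x2 case
assert sp.expand(lhs - rhs) == 0
print("det(1 + mu A) = 1 + mu Tr A + O(mu^2) verified for 2x2")
```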

# References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.

[2] Peeter Joot. Notes and problems for Desai Chapter V. [online]. http://sites.google.com/site/peeterjoot/math2010/desaiCh5.pdf.

[3] Wikipedia. Determinant — wikipedia, the free encyclopedia [online]. 2010. [Online; accessed 10-December-2010]. http://en.wikipedia.org/w/index.php?title=Determinant&oldid=400983667.

## Multivector commutators and Lorentz boosts.

Posted by peeterjoot on October 31, 2010

# Motivation.

In some reading I found that the electrodynamic field components transform in a reversed sense to that of vectors: instead of the components perpendicular to the boost direction remaining unaffected, those are the parts that are altered.

To explore this, look at the Lorentz boost action on a multivector, utilizing symmetric and antisymmetric products to split that vector into portions affected and unaffected by the boost. For the bivector (electrodynamic case) and the four vector case, examine how these map to dot and wedge (or cross) products.

The underlying motivator for this boost consideration is an attempt to see where equation (6.70) of [1] comes from. We get to this by the very end.

# Guts.

## Structure of the bivector boost.

Recall that we can write our Lorentz boost in exponential form with

\begin{aligned}L &= e^{\alpha \boldsymbol{\sigma}/2} \\ X' &= L^\dagger X L,\end{aligned} \hspace{\stretch{1}}(2.1)

where $\boldsymbol{\sigma}$ is a spatial vector. This works for our bivector field too, assuming the composite transformation is an outermorphism of the transformed four vectors. Applying the boost to both the gradient and the potential our transformed field is then

\begin{aligned}F' &= \nabla' \wedge A' \\ &= (L^\dagger \nabla L) \wedge (L^\dagger A L) \\ &= \frac{1}{{2}} \left((L^\dagger \stackrel{ \rightarrow }{\nabla} L) (L^\dagger A L) -(L^\dagger A L) (L^\dagger \stackrel{ \leftarrow }{\nabla} L)\right) \\ &= \frac{1}{{2}} L^\dagger \left( \stackrel{ \rightarrow }{\nabla} A - A \stackrel{ \leftarrow }{\nabla} \right) L \\ &= L^\dagger (\nabla \wedge A) L.\end{aligned}

Note that arrows were used briefly to indicate that the partials of the gradient are still acting on $A$ despite their vector components being to one side. We are left with the very simple transformation rule

\begin{aligned}F' = L^\dagger F L,\end{aligned} \hspace{\stretch{1}}(2.3)

which has exactly the same structure as the four vector boost.
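As an aside (my own check, not part of the text), this exponential boost form can be verified numerically in the Dirac matrix representation, which satisfies the same algebra: $\gamma_0^2 = 1$, $\gamma_k^2 = -1$, with mutual anticommutation. The reverse $L^\dagger$ is constructed explicitly, since the matrix conjugate transpose is not the geometric reverse. The expected components are the familiar hyperbolic form worked out below:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]

# Dirac representation: gamma_0^2 = 1, gamma_k^2 = -1
g0 = np.block([[I2, 0 * I2], [0 * I2, -I2]])
g = [g0] + [np.block([[0 * I2, sk], [-sk, 0 * I2]]) for sk in s]

sigma1 = g[1] @ g[0]  # the boost plane bivector; squares to +1

alpha = 0.8
x = np.array([1.0, 2.0, 3.0, 4.0])
X = sum(xc * gc for xc, gc in zip(x, g))

I4 = np.eye(4, dtype=complex)
L = np.cosh(alpha / 2) * I4 + np.sinh(alpha / 2) * sigma1     # e^{alpha sigma_1 / 2}
Lrev = np.cosh(alpha / 2) * I4 - np.sinh(alpha / 2) * sigma1  # its geometric reverse

Xp = Lrev @ X @ L

# familiar hyperbolic component form of the x-axis boost
xp = np.array([x[0] * np.cosh(alpha) - x[1] * np.sinh(alpha),
               x[1] * np.cosh(alpha) - x[0] * np.sinh(alpha),
               x[2], x[3]])
assert np.allclose(Xp, sum(xc * gc for xc, gc in zip(xp, g)))
print("four vector boost verified")
```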

## Employing the commutator and anticommutator to find the parallel and perpendicular components.

If we apply the boost to a four vector, those components of the four vector that commute with the spatial direction $\boldsymbol{\sigma}$ are unaffected. As an example, which also serves to ensure we have the sign of the rapidity angle $\alpha$ correct, consider $\boldsymbol{\sigma} = \boldsymbol{\sigma}_1$. We have

\begin{aligned}X' = e^{-\alpha \boldsymbol{\sigma}/2} ( x^0 \gamma_0 + x^1 \gamma_1 + x^2 \gamma_2 + x^3 \gamma_3 ) (\cosh \alpha/2 + \gamma_1 \gamma_0 \sinh \alpha/2 )\end{aligned} \hspace{\stretch{1}}(2.4)

We observe that the scalar and $\boldsymbol{\sigma}_1 = \gamma_1 \gamma_0$ components of the exponential commute with $\gamma_2$ and $\gamma_3$ since there is no vector in common, but that $\boldsymbol{\sigma}_1$ anticommutes with $\gamma_0$ and $\gamma_1$. We can therefore write

\begin{aligned}X' &= x^2 \gamma_2 + x^3 \gamma_3 +( x^0 \gamma_0 + x^1 \gamma_1 ) (\cosh \alpha + \gamma_1 \gamma_0 \sinh \alpha ) \\ &= x^2 \gamma_2 + x^3 \gamma_3 +\gamma_0 ( x^0 \cosh\alpha - x^1 \sinh \alpha )+ \gamma_1 ( x^1 \cosh\alpha - x^0 \sinh \alpha )\end{aligned}

reproducing the familiar matrix result should we choose to write it out. How can we express the commutation property without resorting to components? We could write the four vector as a spatial and timelike component, as in

\begin{aligned}X = x^0 \gamma_0 + \mathbf{x} \gamma_0,\end{aligned} \hspace{\stretch{1}}(2.5)

and further separate that into components parallel and perpendicular to the spatial unit vector $\boldsymbol{\sigma}$ as

\begin{aligned}X = x^0 \gamma_0 + (\mathbf{x} \cdot \boldsymbol{\sigma}) \boldsymbol{\sigma} \gamma_0 + (\mathbf{x} \wedge \boldsymbol{\sigma}) \boldsymbol{\sigma} \gamma_0.\end{aligned} \hspace{\stretch{1}}(2.6)

However, it would be nicer to group the first two terms together, since they are ones that are affected by the transformation. It would also be nice to not have to resort to spatial dot and wedge products, since we get into trouble too easily if we try to mix dot and wedge products of four vector and spatial vector components.

What we can do is employ symmetric and antisymmetric products (the anticommutator and commutator respectively). Recall that we can write any multivector product this way, and in particular

\begin{aligned}M \boldsymbol{\sigma} = \frac{1}{{2}} (M \boldsymbol{\sigma} + \boldsymbol{\sigma} M) + \frac{1}{{2}} (M \boldsymbol{\sigma} - \boldsymbol{\sigma} M).\end{aligned} \hspace{\stretch{1}}(2.7)

Left multiplying by the unit spatial vector $\boldsymbol{\sigma}$ we have

\begin{aligned}M = \frac{1}{{2}} (M + \boldsymbol{\sigma} M \boldsymbol{\sigma}) + \frac{1}{{2}} (M - \boldsymbol{\sigma} M \boldsymbol{\sigma}) = \frac{1}{{2}} \left\{{M},{\boldsymbol{\sigma}}\right\} \boldsymbol{\sigma} + \frac{1}{{2}} \left[{M},{\boldsymbol{\sigma}}\right] \boldsymbol{\sigma}.\end{aligned} \hspace{\stretch{1}}(2.8)

When $M = \mathbf{a}$ is a spatial vector this is our familiar split into parallel and perpendicular components with the respective projection and rejection operators

\begin{aligned}\mathbf{a} = \frac{1}{{2}} \left\{\mathbf{a},{\boldsymbol{\sigma}}\right\} \boldsymbol{\sigma} + \frac{1}{{2}} \left[{\mathbf{a}},{\boldsymbol{\sigma}}\right] \boldsymbol{\sigma} = (\mathbf{a} \cdot \boldsymbol{\sigma}) \boldsymbol{\sigma} + (\mathbf{a} \wedge \boldsymbol{\sigma}) \boldsymbol{\sigma}.\end{aligned} \hspace{\stretch{1}}(2.9)
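This split is easy to sanity check in the same Pauli matrix representation of the spatial vectors (a check of mine, not from the text):

```python
import numpy as np

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]

def vec(a):
    """Spatial vector a as a matrix in the Pauli representation."""
    return sum(c * sk for c, sk in zip(a, s))

a = np.array([1.0, -2.0, 0.5])
n = np.array([2.0, 1.0, 2.0]) / 3.0  # unit vector sigma

A, N = vec(a), vec(n)

proj = 0.5 * (A @ N + N @ A) @ N  # anticommutator part
rej = 0.5 * (A @ N - N @ A) @ N   # commutator part

a_par = np.dot(a, n) * n
assert np.allclose(proj, vec(a_par))      # projection (a . n) n
assert np.allclose(rej, vec(a - a_par))   # rejection (a ^ n) n
print("projection/rejection split verified")
```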

However, the more general split employing symmetric and antisymmetric products in 2.8 is something we can use for our four vector and bivector objects too.

Observe that we have the commutation and anti-commutation relationships

\begin{aligned}\left( \frac{1}{{2}} \left\{{M},{\boldsymbol{\sigma}}\right\} \boldsymbol{\sigma} \right) \boldsymbol{\sigma} &= \boldsymbol{\sigma} \left( \frac{1}{{2}} \left\{{M},{\boldsymbol{\sigma}}\right\} \boldsymbol{\sigma} \right) \\ \left( \frac{1}{{2}} \left[{M},{\boldsymbol{\sigma}}\right] \boldsymbol{\sigma} \right) \boldsymbol{\sigma} &= -\boldsymbol{\sigma} \left( \frac{1}{{2}} \left[{M},{\boldsymbol{\sigma}}\right] \boldsymbol{\sigma} \right).\end{aligned} \hspace{\stretch{1}}(2.10)

This split therefore serves to separate the multivector object in question nicely into the portions that are acted on by the Lorentz boost, or left unaffected.

## Application of the symmetric and antisymmetric split to the bivector field.

Let’s apply 2.8 to the spacetime event $X$ again with an x-axis boost $\boldsymbol{\sigma} = \boldsymbol{\sigma}_1$. The anticommutator portion of $X$ in this boost direction is

\begin{aligned}\frac{1}{{2}} \left\{{X},{\boldsymbol{\sigma}_1}\right\} \boldsymbol{\sigma}_1&=\frac{1}{{2}} \left(\left( x^0 \gamma_0 + x^1 \gamma_1 + x^2 \gamma_2 + x^3 \gamma_3 \right)+\gamma_1 \gamma_0\left( x^0 \gamma_0 + x^1 \gamma_1 + x^2 \gamma_2 + x^3 \gamma_3 \right) \gamma_1 \gamma_0\right) \\ &=x^2 \gamma_2 + x^3 \gamma_3,\end{aligned}

whereas the commutator portion gives us

\begin{aligned}\frac{1}{{2}} \left[{X},{\boldsymbol{\sigma}_1}\right] \boldsymbol{\sigma}_1&=\frac{1}{{2}} \left(\left( x^0 \gamma_0 + x^1 \gamma_1 + x^2 \gamma_2 + x^3 \gamma_3 \right)-\gamma_1 \gamma_0\left( x^0 \gamma_0 + x^1 \gamma_1 + x^2 \gamma_2 + x^3 \gamma_3 \right) \gamma_1 \gamma_0\right) \\ &=x^0 \gamma_0 + x^1 \gamma_1.\end{aligned}

We’ve seen that only these commutator portions are acted on by the boost. We have therefore found the desired logical grouping of the four vector $X$ into portions that are left unchanged by the boost and those that are affected. That is

\begin{aligned}\frac{1}{{2}} \left[{X},{\boldsymbol{\sigma}}\right] \boldsymbol{\sigma} &= x^0 \gamma_0 + (\mathbf{x} \cdot \boldsymbol{\sigma}) \boldsymbol{\sigma} \gamma_0 \\ \frac{1}{{2}} \left\{{X},{\boldsymbol{\sigma}}\right\} \boldsymbol{\sigma} &= (\mathbf{x} \wedge \boldsymbol{\sigma}) \boldsymbol{\sigma} \gamma_0 \end{aligned} \hspace{\stretch{1}}(2.12)

Let’s now return to the bivector field $F = \nabla \wedge A = \mathbf{E} + I c \mathbf{B}$, and split that multivector into boostable and unboostable portions with the commutator and anticommutator respectively.

Observing that our pseudoscalar $I$ commutes with all spatial vectors we have for the anticommutator parts that will not be affected by the boost

\begin{aligned}\frac{1}{{2}} \left\{{\mathbf{E} + I c \mathbf{B}},{\boldsymbol{\sigma}}\right\} \boldsymbol{\sigma} &= (\mathbf{E} \cdot \boldsymbol{\sigma}) \boldsymbol{\sigma} + I c (\mathbf{B} \cdot \boldsymbol{\sigma}) \boldsymbol{\sigma},\end{aligned} \hspace{\stretch{1}}(2.14)

and for the components that will be boosted we have

\begin{aligned}\frac{1}{{2}} \left[{\mathbf{E} + I c \mathbf{B}},{\boldsymbol{\sigma}}\right] \boldsymbol{\sigma} &= (\mathbf{E} \wedge \boldsymbol{\sigma}) \boldsymbol{\sigma} + I c (\mathbf{B} \wedge \boldsymbol{\sigma}) \boldsymbol{\sigma}.\end{aligned} \hspace{\stretch{1}}(2.15)

For the four vector case we saw that the components that lay “perpendicular” to the boost direction were unaffected by the boost. For the field we see the opposite, and the components of the individual electric and magnetic fields that are parallel to the boost direction are unaffected.

Our boosted field is therefore

\begin{aligned}F' = (\mathbf{E} \cdot \boldsymbol{\sigma}) \boldsymbol{\sigma} + I c (\mathbf{B} \cdot \boldsymbol{\sigma}) \boldsymbol{\sigma}+ \left( (\mathbf{E} \wedge \boldsymbol{\sigma}) \boldsymbol{\sigma} + I c (\mathbf{B} \wedge \boldsymbol{\sigma}) \boldsymbol{\sigma}\right) \left( \cosh \alpha + \boldsymbol{\sigma} \sinh \alpha \right)\end{aligned} \hspace{\stretch{1}}(2.16)

Focusing on just the non-parallel terms we have

\begin{aligned}\left( (\mathbf{E} \wedge \boldsymbol{\sigma}) \boldsymbol{\sigma} + I c (\mathbf{B} \wedge \boldsymbol{\sigma}) \boldsymbol{\sigma}\right) \left( \cosh \alpha + \boldsymbol{\sigma} \sinh \alpha \right)&=(\mathbf{E}_\perp + I c \mathbf{B}_\perp ) \cosh\alpha+(I \mathbf{E} \times \boldsymbol{\sigma} - c \mathbf{B} \times \boldsymbol{\sigma} ) \sinh\alpha \\ &=\mathbf{E}_\perp \cosh\alpha - c (\mathbf{B} \times \boldsymbol{\sigma} ) \sinh\alpha + I ( c \mathbf{B}_\perp \cosh\alpha + (\mathbf{E} \times \boldsymbol{\sigma}) \sinh\alpha ) \\ &=\gamma \left(\mathbf{E}_\perp - c (\mathbf{B} \times \boldsymbol{\sigma} ) {\left\lvert{\mathbf{v}}\right\rvert}/c+ I ( c \mathbf{B}_\perp + (\mathbf{E} \times \boldsymbol{\sigma}) {\left\lvert{\mathbf{v}}\right\rvert}/c) \right)\end{aligned}

A final regrouping gives us

\begin{aligned}F'&=\mathbf{E}_\parallel + \gamma \left( \mathbf{E}_\perp - \mathbf{B} \times \mathbf{v} \right)+I c \left( \mathbf{B}_\parallel + \gamma \left( \mathbf{B}_\perp + \mathbf{E} \times \mathbf{v}/c^2 \right) \right)\end{aligned} \hspace{\stretch{1}}(2.17)
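Equation 2.17 can also be verified numerically using a Dirac matrix representation of the spacetime algebra (a check of mine, not from the text): build $F = \mathbf{E} + Ic\mathbf{B}$ as a matrix, apply $F' = L^\dagger F L$, and compare against the component form, here with $c = 1$ and a boost along x:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
g0 = np.block([[I2, 0 * I2], [0 * I2, -I2]])
g = [g0] + [np.block([[0 * I2, sk], [-sk, 0 * I2]]) for sk in s]

sig = [g[k] @ g[0] for k in (1, 2, 3)]  # relative spatial vectors sigma_k
I5 = g[0] @ g[1] @ g[2] @ g[3]          # spacetime pseudoscalar

c = 1.0

def field(E, B):
    """F = E + I c B as a matrix in this representation."""
    Ev = sum(Ek * sk for Ek, sk in zip(E, sig))
    Bv = sum(Bk * sk for Bk, sk in zip(B, sig))
    return Ev + c * (I5 @ Bv)

E = np.array([0.3, -1.2, 0.8])
B = np.array([0.5, 0.4, -0.9])
F = field(E, B)

alpha = 0.6
n = np.array([1.0, 0.0, 0.0])  # unit boost direction
v = c * np.tanh(alpha) * n
gamma = np.cosh(alpha)

I4 = np.eye(4, dtype=complex)
L = np.cosh(alpha / 2) * I4 + np.sinh(alpha / 2) * sig[0]
Lrev = np.cosh(alpha / 2) * I4 - np.sinh(alpha / 2) * sig[0]  # geometric reverse of L
Fp = Lrev @ F @ L

# the component form of equation 2.17
E_par = np.dot(E, n) * n
B_par = np.dot(B, n) * n
Ep = E_par + gamma * ((E - E_par) - np.cross(B, v))
Bp = B_par + gamma * ((B - B_par) + np.cross(E, v) / c**2)

assert np.allclose(Fp, field(Ep, Bp))
print("bivector field boost matches 2.17")
```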

In particular when we consider the proton, electron system as in equation (6.70) of [1] where it is stated that the electron will feel a magnetic field given by

\begin{aligned}\mathbf{B} = - \frac{\mathbf{v}}{c} \times \mathbf{E}\end{aligned} \hspace{\stretch{1}}(2.18)

we can see where this comes from. If $F = \mathbf{E} + I c (0)$ is the field acting on the electron, then application of a $\mathbf{v}$ boost to the electron perpendicular to the field (ie: radial motion), we get

\begin{aligned}F' = I c \gamma \mathbf{E} \times \mathbf{v}/c^2 =-I c \gamma \frac{\mathbf{v}}{c^2} \times \mathbf{E}\end{aligned} \hspace{\stretch{1}}(2.19)

We also have an additional $1/c$ factor in our result, but that’s a consequence of the choice of units where the dimensions of $\mathbf{E}$ match $c \mathbf{B}$, whereas in the text we have $\mathbf{E}$ and $\mathbf{B}$ in the same units. We also have an additional $\gamma$ factor, so we must presume that ${\left\lvert{\mathbf{v}}\right\rvert} \ll c$ in this portion of the text. That is actually a requirement here, for if the electron was already in motion, we'd have to boost a field that also included a magnetic component. A consequence of this is that the final interaction Hamiltonian of (6.75) is necessarily non-relativistic.

# References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.

## Notes and problems for Desai chapter IV.

Posted by peeterjoot on October 12, 2010

# Notes.

Chapter IV notes and problems for [1].

There’s a lot of magic related to the spherical Harmonics in this chapter, with identities pulled out of the Author’s butt. It would be nice to work through that, but need a better reference to work from (or skip ahead to chapter 26 where some of this is apparently derived).

Other stuff pending background derivation and verification includes

\begin{itemize}
\item Antisymmetric tensor summation identity.

\begin{aligned}\sum_i \epsilon_{ijk} \epsilon_{iab} = \delta_{ja} \delta_{kb} - \delta_{jb}\delta_{ka}\end{aligned} \hspace{\stretch{1}}(1.1)

This is obviously the coordinate equivalent of the dot product of two bivectors

\begin{aligned}(\mathbf{e}_j \wedge \mathbf{e}_k) \cdot (\mathbf{e}_a \wedge \mathbf{e}_b) &=( (\mathbf{e}_j \wedge \mathbf{e}_k) \cdot \mathbf{e}_a ) \cdot \mathbf{e}_b =\delta_{ka}\delta_{jb} - \delta_{ja}\delta_{kb}\end{aligned} \hspace{\stretch{1}}(1.2)

We can prove 1.1 by expanding the LHS of 1.2 in coordinates. For $j \ne k$ we can write $\mathbf{e}_j \mathbf{e}_k = \sum_i \epsilon_{ijk} I \mathbf{e}_i$, where $I = \mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_3$ is the pseudoscalar, which commutes with all vectors and squares to $-1$. This gives

\begin{aligned}(\mathbf{e}_j \wedge \mathbf{e}_k) \cdot (\mathbf{e}_a \wedge \mathbf{e}_b)&=\left\langle{{\mathbf{e}_j \mathbf{e}_k \mathbf{e}_a \mathbf{e}_b}}\right\rangle \\ &=\sum_{ie}\epsilon_{ijk} \epsilon_{eab}\left\langle{{I \mathbf{e}_i I \mathbf{e}_e}}\right\rangle \\ &=\sum_{ie}\epsilon_{ijk} \epsilon_{eab} I^2 \delta_{ie} \\ &=-\sum_i\epsilon_{ijk} \epsilon_{iab}\qquad\square\end{aligned}

\item Question on raising and lowering arguments.

How equation (4.240) was arrived at is not clear. In (4.239) he writes

\begin{aligned}\int_0^{2\pi} \int_0^{\pi} d\theta d\phi(L_{-} Y_{lm})^\dagger L_{-} Y_{lm} \sin\theta\end{aligned}

Shouldn’t that Hermitian conjugation be just complex conjugation? If so, one would have

\begin{aligned}\int_0^{2\pi} \int_0^{\pi} d\theta d\phi L_{-}^{*} Y_{lm}^{*}L_{-} Y_{lm} \sin\theta\end{aligned}

How does he end up with the $L_{-}$ and the $Y_{lm}^{*}$ interchanged? What justifies this commutation?

A much clearer discussion of this can be found in The operators $L_{\pm}$, where Dirac notation is used for the normalization discussion.

\item Another question on raising and lowering arguments.

The reasoning leading to (4.238) isn’t clear to me. I fail to see how the $L_{-}$ commutation with $\mathbf{L}^2$ implies this.

\end{itemize}

# Problems

## Problem 1.

### Statement.

Write down the free particle Schr\”{o}dinger equation for two dimensions in (i) Cartesian and (ii) polar coordinates. Obtain the corresponding wavefunction.

### Cartesian case.

For the Cartesian coordinates case we have

\begin{aligned}H = -\frac{\hbar^2}{2m} (\partial_{xx} + \partial_{yy}) = i \hbar \partial_t\end{aligned} \hspace{\stretch{1}}(2.3)

Application of separation of variables with $\Psi = XYT$ gives

\begin{aligned}-\frac{\hbar^2}{2m} \left( \frac{X''}{X} +\frac{Y''}{Y} \right) = i \hbar \frac{T'}{T} = E .\end{aligned} \hspace{\stretch{1}}(2.4)

Immediately, we have the time dependence

\begin{aligned}T \propto e^{-i E t/\hbar},\end{aligned} \hspace{\stretch{1}}(2.5)

with the PDE reduced to

\begin{aligned}\frac{X''}{X} +\frac{Y''}{Y} = - \frac{2m E}{\hbar^2}.\end{aligned} \hspace{\stretch{1}}(2.6)

Introducing separate independent constants

\begin{aligned}\frac{X''}{X} &= a^2 \\ \frac{Y''}{Y} &= b^2 \end{aligned} \hspace{\stretch{1}}(2.7)

provides the pre-normalized wave function and the constraints on the constants

\begin{aligned}\Psi &= C e^{ax}e^{by}e^{-iE t/\hbar} \\ a^2 + b^2 &= -\frac{2 m E}{\hbar^2}.\end{aligned} \hspace{\stretch{1}}(2.9)
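That this separated solution satisfies the free particle Schrödinger equation when the constraint on $a, b$ holds can be confirmed with sympy (my own check):

```python
import sympy as sp

x, y, t, a, b, hbar, m = sp.symbols('x y t a b hbar m', nonzero=True)
E = -hbar**2 * (a**2 + b**2) / (2 * m)  # the constraint (2.9) solved for E

psi = sp.exp(a * x) * sp.exp(b * y) * sp.exp(-sp.I * E * t / hbar)

lhs = -hbar**2 / (2 * m) * (sp.diff(psi, x, 2) + sp.diff(psi, y, 2))
rhs = sp.I * hbar * sp.diff(psi, t)

assert sp.simplify(lhs - rhs) == 0
print("separated free particle solution verified")
```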

### Rectangular normalization.

We are now ready to apply normalization constraints. One possibility is a rectangular periodicity requirement.

\begin{aligned}e^{ax} &= e^{a(x + \lambda_x)} \\ e^{by} &= e^{b(y + \lambda_y)} ,\end{aligned} \hspace{\stretch{1}}(2.11)

or

\begin{aligned}a\lambda_x &= 2 \pi i m \\ b\lambda_y &= 2 \pi i n.\end{aligned} \hspace{\stretch{1}}(2.13)

This provides a more explicit form for the energy expression (where $m, n$ in the indices are integers, not to be confused with the mass $m$)

\begin{aligned}E_{mn} &= \frac{1}{{2m}} 4 \pi^2 \hbar^2 \left( \frac{m^2}{{\lambda_x}^2}+\frac{n^2}{{\lambda_y}^2}\right).\end{aligned} \hspace{\stretch{1}}(2.15)

We can also add in the area normalization using

\begin{aligned}\left\langle{{\psi}} \vert {{\phi}}\right\rangle &= \int_{x=0}^{\lambda_x} dx\int_{y=0}^{\lambda_y} dy \psi^{*}(x,y) \phi(x,y).\end{aligned} \hspace{\stretch{1}}(2.16)

Our eigenfunctions are now completely specified

\begin{aligned}u_{mn}(x,y,t) &= \frac{1}{{\sqrt{\lambda_x \lambda_y}}}e^{2 \pi i m x/\lambda_x}e^{2 \pi i n y/\lambda_y}e^{-iE t/\hbar}.\end{aligned} \hspace{\stretch{1}}(2.17)

We can expand an arbitrary function in terms of this basis

\begin{aligned}f(x,y) = \sum_{mn} a_{mn} u_{mn}\end{aligned} \hspace{\stretch{1}}(2.18)

and then “solve” for $a_{mn}$, for an arbitrary $f(x,y)$ by taking inner products

\begin{aligned}a_{mn} = \left\langle{{u_{mn}}} \vert {{f}}\right\rangle =\int_{x=0}^{\lambda_x} dx \int_{y=0}^{\lambda_y} dy f(x,y) u_{mn}^{*}(x,y).\end{aligned} \hspace{\stretch{1}}(2.19)

This gives the appearance that any function $f(x,y)$ is a solution, but the equality of 2.18 only applies for functions in the span of this function vector space. The procedure works for arbitrary square integrable functions $f(x,y)$, but the equality really means that the RHS will be the periodic extension of $f(x,y)$.

### Infinite space normalization.

An alternate normalization is possible by using the Fourier transform normalization, in which we substitute

\begin{aligned}\frac{2 \pi m }{\lambda_x} &= k_x \\ \frac{2 \pi n }{\lambda_y} &= k_y \end{aligned} \hspace{\stretch{1}}(2.20)

Our inner product is now

\begin{aligned}\left\langle{{\psi}} \vert {{\phi}}\right\rangle &= \int_{-\infty}^{\infty} dx\int_{-\infty}^{\infty} dy \psi^{*}(x,y) \phi(x,y).\end{aligned} \hspace{\stretch{1}}(2.22)

And the corresponding normalized wavefunction and associated energy constant $E$ are

\begin{aligned}u_{\mathbf{k}}(x,y,t) &= \frac{1}{{2\pi}}e^{i k_x x}e^{i k_y y}e^{-iE t/\hbar} = \frac{1}{{2\pi}}e^{i \mathbf{k} \cdot \mathbf{x}}e^{-iE t/\hbar} \\ E &= \frac{\hbar^2 \mathbf{k}^2 }{2m}\end{aligned} \hspace{\stretch{1}}(2.23)

Now via this Fourier inner product we are able to construct a solution from any square integrable function. Again, this will not be
an exact equality since the Fourier transform has the effect of averaging across discontinuities.

### Polar case.

In polar coordinates our gradient is

\begin{aligned}\boldsymbol{\nabla} &= \hat{\mathbf{r}} \partial_r + \frac{\hat{\boldsymbol{\theta}}}{r} \partial_\theta.\end{aligned} \hspace{\stretch{1}}(2.25)

with

\begin{aligned}\hat{\mathbf{r}} &= \mathbf{e}_1 e^{\mathbf{e}_1 \mathbf{e}_2 \theta} \\ \hat{\boldsymbol{\theta}} &= \mathbf{e}_2 e^{\mathbf{e}_1 \mathbf{e}_2 \theta} .\end{aligned} \hspace{\stretch{1}}(2.26)

Squaring the gradient for the Laplacian we’ll need the partials, which are

\begin{aligned}\partial_r \hat{\mathbf{r}} &= 0 \\ \partial_r \hat{\boldsymbol{\theta}} &= 0 \\ \partial_\theta \hat{\mathbf{r}} &= \hat{\boldsymbol{\theta}} \\ \partial_\theta \hat{\boldsymbol{\theta}} &= -\hat{\mathbf{r}}.\end{aligned}

The Laplacian is therefore

\begin{aligned}\boldsymbol{\nabla}^2 &= \left(\hat{\mathbf{r}} \partial_r + \frac{\hat{\boldsymbol{\theta}}}{r} \partial_\theta\right) \cdot \left(\hat{\mathbf{r}} \partial_r + \frac{\hat{\boldsymbol{\theta}}}{r} \partial_\theta\right) \\ &= \partial_{rr} + \frac{\hat{\boldsymbol{\theta}}}{r} \cdot \partial_\theta \left(\hat{\mathbf{r}} \partial_r\right) + \frac{\hat{\boldsymbol{\theta}}}{r} \cdot \partial_\theta \left(\frac{\hat{\boldsymbol{\theta}}}{r} \partial_\theta\right) \\ &= \partial_{rr} + \frac{\hat{\boldsymbol{\theta}}}{r} \cdot (\partial_\theta \hat{\mathbf{r}}) \partial_r + \frac{\hat{\boldsymbol{\theta}}}{r} \cdot \frac{\hat{\boldsymbol{\theta}}}{r} \partial_{\theta\theta} + \frac{\hat{\boldsymbol{\theta}}}{r} \cdot (\partial_\theta \hat{\boldsymbol{\theta}}) \frac{1}{{r}} \partial_\theta .\end{aligned}

Evaluating the derivatives we have

\begin{aligned}\boldsymbol{\nabla}^2 = \partial_{rr} + \frac{1}{{r}} \partial_r + \frac{1}{r^2} \partial_{\theta\theta},\end{aligned} \hspace{\stretch{1}}(2.28)

and are now prepared to move on to the solution of the Hamiltonian $H = -(\hbar^2/2m) \boldsymbol{\nabla}^2$. With separation of variables again using $\Psi = R(r) \Theta(\theta) T(t)$ we have

\begin{aligned}-\frac{\hbar^2}{2m} \left( \frac{R''}{R} + \frac{R'}{rR} + \frac{1}{{r^2}} \frac{\Theta''}{\Theta} \right) = i \hbar \frac{T'}{T} = E.\end{aligned} \hspace{\stretch{1}}(2.29)

Rearranging to separate the $\Theta$ term we have

\begin{aligned}\frac{r^2 R''}{R} + \frac{r R'}{R} + \frac{2 m E}{\hbar^2} r^2 = -\frac{\Theta''}{\Theta} = \lambda^2.\end{aligned} \hspace{\stretch{1}}(2.30)

The angular solutions are given by

\begin{aligned}\Theta = \frac{1}{{\sqrt{2\pi}}} e^{i \lambda \theta}\end{aligned} \hspace{\stretch{1}}(2.31)

Where the normalization is given by

\begin{aligned}\left\langle{{\psi}} \vert {{\phi}}\right\rangle &= \int_{0}^{2 \pi} d\theta \psi^{*}(\theta) \phi(\theta).\end{aligned} \hspace{\stretch{1}}(2.32)

and the radial portion by the solution of the PDE

\begin{aligned}r^2 R'' + r R' + \left( \frac{2 m E}{\hbar^2} r^2 - \lambda^2 \right) R = 0\end{aligned} \hspace{\stretch{1}}(2.33)
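Two quick sympy checks of my own (not from the text): first, that the polar Laplacian 2.28 reproduces the Cartesian Laplacian on a test function $f = r^3 \cos\theta = x(x^2 + y^2)$; second, that with $k^2 = 2mE/\hbar^2$ the radial equation is Bessel's equation in $kr$, solved (for the solution regular at the origin) by $R = J_\lambda(kr)$:

```python
import sympy as sp

r, theta, x, y = sp.symbols('r theta x y', positive=True)

# check the polar Laplacian (2.28) on f = r^3 cos(theta) = x (x^2 + y^2)
f = r**3 * sp.cos(theta)
lap_polar = sp.diff(f, r, 2) + sp.diff(f, r) / r + sp.diff(f, theta, 2) / r**2
assert sp.simplify(lap_polar - 8 * r * sp.cos(theta)) == 0
f_cart = x * (x**2 + y**2)
assert sp.expand(sp.diff(f_cart, x, 2) + sp.diff(f_cart, y, 2) - 8 * x) == 0

# check that R = J_lam(k r) solves the radial equation, with k^2 = 2 m E / hbar^2
k, lam = sp.symbols('k lam', positive=True)
R = sp.besselj(lam, k * r)
radial = r**2 * sp.diff(R, r, 2) + r * sp.diff(R, r) + (k**2 * r**2 - lam**2) * R
for r0 in (sp.Rational(1, 2), 1, 2):
    # numeric residual at sample points, for a sample order and wavenumber
    val = radial.subs({lam: 3, k: 2, r: r0}).evalf(30)
    assert abs(val) < 1e-20
print("polar Laplacian and Bessel radial solution verified")
```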

## Problem 2.

### Statement.

Use the orthogonality property of $P_l(\cos\theta)$

\begin{aligned}\int_{-1}^1 dx P_l(x) P_{l'}(x) = \frac{2}{2l+1} \delta_{l l'},\end{aligned} \hspace{\stretch{1}}(2.34)

confirm that at least the first two terms of (4.171)

\begin{aligned}e^{i k r \cos\theta} = \sum_{l=0}^\infty (2l + 1) i^l j_l(kr) P_l(\cos\theta)\end{aligned} \hspace{\stretch{1}}(2.35)

are correct.

### Solution.

Taking the inner product using the integral of 2.34 we have

\begin{aligned}\int_{-1}^1 dx e^{i k r x} P_{l'}(x) = 2 i^{l'} j_{l'}(kr) \end{aligned} \hspace{\stretch{1}}(2.36)

To confirm the first two terms we need

\begin{aligned}P_0(x) &= 1 \\ P_1(x) &= x \\ j_0(\rho) &= \frac{\sin\rho}{\rho} \\ j_1(\rho) &= \frac{\sin\rho}{\rho^2} - \frac{\cos\rho}{\rho}.\end{aligned} \hspace{\stretch{1}}(2.37)

On the LHS for $l'=0$ we have

\begin{aligned}\int_{-1}^1 dx e^{i k r x} = 2 \frac{\sin{kr}}{kr}\end{aligned} \hspace{\stretch{1}}(2.41)

On the LHS for $l'=1$ note that

\begin{aligned}\int dx x e^{i k r x} &= \int dx x \frac{d}{dx} \frac{e^{i k r x}}{ikr} \\ &= x \frac{e^{i k r x}}{ikr} - \frac{e^{i k r x}}{(ikr)^2}.\end{aligned}

So, integration in $[-1,1]$ gives us

\begin{aligned}\int_{-1}^1 dx\, x\, e^{i k r x} = -2i \frac{\cos{kr}}{kr} + 2i \frac{1}{{(kr)^2}} \sin{kr}.\end{aligned} \hspace{\stretch{1}}(2.42)

Now compare to the RHS for $l'=0$, which is

\begin{aligned}2 j_0(kr) = 2 \frac{\sin{kr}}{kr},\end{aligned} \hspace{\stretch{1}}(2.43)

which matches 2.41. For $l'=1$ we have

\begin{aligned}2 i j_1(kr) = 2i \frac{1}{{kr}} \left( \frac{\sin{kr}}{kr} - \cos{kr} \right),\end{aligned} \hspace{\stretch{1}}(2.44)

which in turn matches 2.42, completing the exercise.
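The same two terms can also be confirmed symbolically with sympy, writing $u = kr$ (a check of mine, not part of the problem):

```python
import sympy as sp

x, u = sp.symbols('x u', positive=True)  # u stands for k r

j0 = sp.sin(u) / u
j1 = sp.sin(u) / u**2 - sp.cos(u) / u

# l' = 0: integral of exp(i k r x) P_0(x) over [-1, 1]
lhs0 = sp.integrate(sp.exp(sp.I * u * x), (x, -1, 1))
assert sp.simplify((lhs0 - 2 * j0).rewrite(sp.exp)) == 0

# l' = 1: integral of exp(i k r x) P_1(x) over [-1, 1]
lhs1 = sp.integrate(x * sp.exp(sp.I * u * x), (x, -1, 1))
assert sp.simplify((lhs1 - 2 * sp.I * j1).rewrite(sp.exp)) == 0
print("first two terms of the plane wave expansion verified")
```

Rewriting both sides in exponential form sidesteps sympy's reluctance to convert complex exponentials back to trig functions.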

## Problem 3.

### Statement.

Obtain the commutation relations $\left[{L_i},{L_j}\right]$ by calculating the vector $\mathbf{L} \times \mathbf{L}$ using the definition $\mathbf{L} = \mathbf{r} \times \mathbf{p}$ directly instead of introducing a differential operator.

### Solution.

Expressing the product $\mathbf{L} \times \mathbf{L}$ in determinant form sheds some light on this question. That is

\begin{aligned}\begin{vmatrix} \mathbf{e}_1 & \mathbf{e}_2 & \mathbf{e}_3 \\ L_1 & L_2 & L_3 \\ L_1 & L_2 & L_3\end{vmatrix}&= \mathbf{e}_1 \left[{L_2},{L_3}\right] +\mathbf{e}_2 \left[{L_3},{L_1}\right] +\mathbf{e}_3 \left[{L_1},{L_2}\right]= \frac{1}{2} \mathbf{e}_i \epsilon_{ijk} \left[{L_j},{L_k}\right]\end{aligned} \hspace{\stretch{1}}(2.45)

We see that evaluating this cross product in turn requires evaluation of the set of commutators. We can do that with the canonical commutator relationships directly using $L_i = \epsilon_{ijk} r_j p_k$ like so

\begin{aligned}\left[{L_i},{L_j}\right]&=\epsilon_{imn} r_m p_n \epsilon_{jab} r_a p_b- \epsilon_{jab} r_a p_b \epsilon_{imn} r_m p_n \\ &=\epsilon_{imn} \epsilon_{jab} r_m (p_n r_a) p_b- \epsilon_{jab} \epsilon_{imn} r_a (p_b r_m) p_n \\ &=\epsilon_{imn} \epsilon_{jab} r_m (r_a p_n -i \hbar \delta_{an}) p_b- \epsilon_{jab} \epsilon_{imn} r_a (r_m p_b - i \hbar \delta_{mb}) p_n \\ &=\epsilon_{imn} \epsilon_{jab} (r_m r_a p_n p_b - r_a r_m p_b p_n )- i \hbar ( \epsilon_{imn} \epsilon_{jnb} r_m p_b - \epsilon_{jam} \epsilon_{imn} r_a p_n ).\end{aligned}

The first two terms cancel, and we can employ (4.179) to eliminate the antisymmetric tensors from the last two terms

\begin{aligned}\left[{L_i},{L_j}\right]&=i \hbar ( \epsilon_{nim} \epsilon_{njb} r_m p_b - \epsilon_{mja} \epsilon_{min} r_a p_n ) \\ &=i \hbar ( (\delta_{ij} \delta_{mb} -\delta_{ib} \delta_{mj}) r_m p_b - (\delta_{ji} \delta_{an} -\delta_{jn} \delta_{ai}) r_a p_n ) \\ &=i \hbar (\delta_{ij} \delta_{mb} r_m p_b - \delta_{ji} \delta_{an} r_a p_n - \delta_{ib} \delta_{mj} r_m p_b + \delta_{jn} \delta_{ai} r_a p_n ) \\ &=i \hbar (\delta_{ij} r_m p_m- \delta_{ji} r_a p_a- r_j p_i+ r_i p_j ) \\ \end{aligned}

Since $r_i p_j - r_j p_i = \epsilon_{ijk} (\mathbf{r} \times \mathbf{p})_k = \epsilon_{ijk} L_k$, this is $\left[{L_i},{L_j}\right] = i\hbar \epsilon_{ijk} L_k$, so we can write

\begin{aligned}\mathbf{L} \times \mathbf{L} &= \frac{i\hbar}{2} \mathbf{e}_k \epsilon_{kij} ( r_i p_j - r_j p_i ) = i\hbar \mathbf{e}_k \epsilon_{kij} r_i p_j = i\hbar \mathbf{e}_k L_k = i\hbar \mathbf{L}.\end{aligned} \hspace{\stretch{1}}(2.46)

In [2], instead of the form (4.224) used here in Desai, the commutator relationships are summarized using the antisymmetric tensor as

\begin{aligned}\left[{L_i},{L_j}\right] &= i \hbar \epsilon_{ijk} L_k.\end{aligned} \hspace{\stretch{1}}(2.47)

Both say the same thing.
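This result is easy to spot check with matrices (my own sketch, with $\hbar = 1$): the standard $3 \times 3$ representations of $L_x$, $L_y$, $L_z$ for $l = 1$ should satisfy exactly the commutation relations just derived:

```python
import math

s = 1 / math.sqrt(2)
# standard l = 1 angular momentum matrices, hbar = 1
Lx = [[0, s, 0], [s, 0, s], [0, s, 0]]
Ly = [[0, -1j * s, 0], [1j * s, 0, -1j * s], [0, 1j * s, 0]]
Lz = [[1, 0, 0], [0, 0, 0], [0, 0, -1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def commutator(A, B):
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(3)] for i in range(3)]

def close(A, B, tol=1e-12):
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(3) for j in range(3))

# [L_x, L_y] = i L_z, and cyclic permutations
assert close(commutator(Lx, Ly), [[1j * z for z in row] for row in Lz])
assert close(commutator(Ly, Lz), [[1j * z for z in row] for row in Lx])
assert close(commutator(Lz, Lx), [[1j * z for z in row] for row in Ly])
```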

TODO.

## Problem 5.

### Statement.

A free particle is moving along a circular path of radius $R$. Express the Hamiltonian in terms of the derivatives involving the polar angle of the particle and write down the Schr\"{o}dinger equation. Determine the wavefunction and the energy eigenvalues of the particle.

### Solution.

In classical mechanics our Lagrangian for this system is

\begin{aligned}\mathcal{L} = \frac{1}{{2}} m R^2 \dot{\theta}^2,\end{aligned} \hspace{\stretch{1}}(2.48)

with the canonical momentum

\begin{aligned}p_\theta = \frac{\partial {\mathcal{L}}}{\partial {\dot{\theta}}} = m R^2 \dot{\theta}.\end{aligned} \hspace{\stretch{1}}(2.49)

Thus the classical Hamiltonian is

\begin{aligned}H = \frac{1}{{2m R^2}} {p_\theta}^2.\end{aligned} \hspace{\stretch{1}}(2.50)

By analogy the QM Hamiltonian operator will therefore be

\begin{aligned}H = -\frac{\hbar^2}{2m R^2} \partial_{\theta\theta}.\end{aligned} \hspace{\stretch{1}}(2.51)

For $\Psi = \Theta(\theta) T(t)$, separation of variables gives us

\begin{aligned}-\frac{\hbar^2}{2m R^2} \frac{\Theta''}{\Theta} = i \hbar \frac{T'}{T} = E,\end{aligned} \hspace{\stretch{1}}(2.52)

from which we have

\begin{aligned}T &\propto e^{-i E t/\hbar} \\ \Theta &\propto e^{ \pm i \sqrt{2m E} R \theta/\hbar }.\end{aligned} \hspace{\stretch{1}}(2.53)

Requiring that $\Theta$ be single valued, equal at any multiple of $2\pi$, we have

\begin{aligned}e^{ \pm i \sqrt{2m E} R (\theta + 2\pi)/\hbar } = e^{ \pm i \sqrt{2m E} R \theta/\hbar },\end{aligned}

or

\begin{aligned}\pm \sqrt{2m E} \frac{R}{\hbar} 2\pi = 2 \pi n,\end{aligned}

for integer $n$. Suffixing the energy values with this index we have

\begin{aligned}E_n = \frac{n^2 \hbar^2}{2 m R^2}.\end{aligned} \hspace{\stretch{1}}(2.55)

Allowing both positive and negative integer values for $n$ we have

\begin{aligned}\Psi = \frac{1}{{\sqrt{2\pi}}} e^{i n \theta} e^{-i E_n t/\hbar},\end{aligned} \hspace{\stretch{1}}(2.56)

where the normalization was a result of the use of a $[0,2\pi]$ inner product over the angles

\begin{aligned}\left\langle{{\psi}} \vert {{\phi}}\right\rangle \equiv \int_0^{2\pi} \psi^{*}(\theta) \phi(\theta) d\theta.\end{aligned} \hspace{\stretch{1}}(2.57)
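A small numerical check (my own, not part of the problem, with $\hbar = m = R = 1$) that $\Theta_n = e^{in\theta}$ is an eigenfunction of this Hamiltonian with the energies above, using a finite-difference second derivative:

```python
import cmath

def check(n, theta, h=1e-4):
    # Theta_n(theta) = e^{i n theta}; apply H = -(1/2) d^2/dtheta^2 numerically
    psi = lambda t: cmath.exp(1j * n * t)
    second = (psi(theta + h) - 2 * psi(theta) + psi(theta - h)) / h**2
    H_psi = -0.5 * second
    E_n = 0.5 * n**2                 # E_n = n^2 hbar^2 / (2 m R^2)
    return abs(H_psi - E_n * psi(theta))

# both signs of n are allowed, with the same energy
for n in (-2, -1, 0, 1, 3):
    for theta in (0.3, 1.1, 2.9):
        assert check(n, theta) < 1e-4
```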

## Problem 6.

### Statement.

Determine $\left[{L_i},{r}\right]$ and $\left[{L_i},{\mathbf{r}}\right]$.

### Solution.

Since $L_i$ contain only $\theta$ and $\phi$ partials, $\left[{L_i},{r}\right] = 0$. For the position vector, however, we have an angular dependence, and are left to evaluate $\left[{L_i},{\mathbf{r}}\right] = r \left[{L_i},{\hat{\mathbf{r}}}\right]$. We’ll need the partials for $\hat{\mathbf{r}}$. We have

\begin{aligned}\hat{\mathbf{r}} &= \mathbf{e}_3 e^{I \hat{\boldsymbol{\phi}} \theta} \\ \hat{\boldsymbol{\phi}} &= \mathbf{e}_2 e^{\mathbf{e}_1 \mathbf{e}_2 \phi} \\ I &= \mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_3\end{aligned} \hspace{\stretch{1}}(2.58)

Evaluating the partials we have

\begin{aligned}\partial_\theta \hat{\mathbf{r}} = \hat{\mathbf{r}} I \hat{\boldsymbol{\phi}}\end{aligned}

With

\begin{aligned}\hat{\boldsymbol{\theta}} &= \tilde{R} \mathbf{e}_1 R \\ \hat{\boldsymbol{\phi}} &= \tilde{R} \mathbf{e}_2 R \\ \hat{\mathbf{r}} &= \tilde{R} \mathbf{e}_3 R\end{aligned} \hspace{\stretch{1}}(2.61)

where $\tilde{R} R = 1$, and $\hat{\boldsymbol{\theta}} \hat{\boldsymbol{\phi}} \hat{\mathbf{r}} = \mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_3$, we have

\begin{aligned}\partial_\theta \hat{\mathbf{r}} &= \tilde{R} \mathbf{e}_3 \mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_3 \mathbf{e}_2 R = \tilde{R} \mathbf{e}_1 R = \hat{\boldsymbol{\theta}}\end{aligned} \hspace{\stretch{1}}(2.64)

For the $\phi$ partial we have

\begin{aligned}\partial_\phi \hat{\mathbf{r}}&= \mathbf{e}_3 \sin\theta I \hat{\boldsymbol{\phi}} \mathbf{e}_1 \mathbf{e}_2 \\ &= \sin\theta \hat{\boldsymbol{\phi}}\end{aligned}

We are now prepared to evaluate the commutators. Starting with the easiest we have

\begin{aligned}\left[{L_z},{\hat{\mathbf{r}}}\right] \Psi&=-i \hbar (\partial_\phi \hat{\mathbf{r}} \Psi - \hat{\mathbf{r}} \partial_\phi \Psi ) \\ &=-i \hbar (\partial_\phi \hat{\mathbf{r}}) \Psi \\ \end{aligned}

So we have

\begin{aligned}\left[{L_z},{\hat{\mathbf{r}}}\right]&=-i \hbar \sin\theta \hat{\boldsymbol{\phi}}\end{aligned} \hspace{\stretch{1}}(2.65)

Observe that by virtue of the chain rule, only the action of the partials on $\hat{\mathbf{r}}$ itself contributes, and all the partials applied to $\Psi$ cancel out due to the commutator differences. That simplifies the remaining commutator evaluations. For reference, the polar forms of $L_x$ and $L_y$ are

\begin{aligned}L_x &= -i \hbar (-S_\phi \partial_\theta - C_\phi \cot\theta \partial_\phi) \\ L_y &= -i \hbar (C_\phi \partial_\theta - S_\phi \cot\theta \partial_\phi),\end{aligned} \hspace{\stretch{1}}(2.66)

where the sines and cosines are written with $S$, and $C$ respectively for short.

We therefore have

\begin{aligned}\left[{L_x},{\hat{\mathbf{r}}}\right]&= -i \hbar (-S_\phi (\partial_\theta \hat{\mathbf{r}}) - C_\phi \cot\theta (\partial_\phi \hat{\mathbf{r}}) ) \\ &= -i \hbar (-S_\phi \hat{\boldsymbol{\theta}} - C_\phi \cot\theta S_\theta \hat{\boldsymbol{\phi}} ) \\ &= -i \hbar (-S_\phi \hat{\boldsymbol{\theta}} - C_\phi C_\theta \hat{\boldsymbol{\phi}} ) \\ \end{aligned}

and

\begin{aligned}\left[{L_y},{\hat{\mathbf{r}}}\right]&= -i \hbar (C_\phi (\partial_\theta \hat{\mathbf{r}}) - S_\phi \cot\theta (\partial_\phi \hat{\mathbf{r}})) \\ &= -i \hbar (C_\phi \hat{\boldsymbol{\theta}} - S_\phi C_\theta \hat{\boldsymbol{\phi}} ).\end{aligned}

Adding back in the factor of $r$, and summarizing we have

\begin{aligned}\left[{L_i},{r}\right] &= 0 \\ \left[{L_x},{\mathbf{r}}\right] &= -i \hbar r (-\sin\phi \hat{\boldsymbol{\theta}} - \cos\phi \cos\theta \hat{\boldsymbol{\phi}} ) \\ \left[{L_y},{\mathbf{r}}\right] &= -i \hbar r (\cos\phi \hat{\boldsymbol{\theta}} - \sin\phi \cos\theta \hat{\boldsymbol{\phi}} ) \\ \left[{L_z},{\mathbf{r}}\right] &= -i \hbar r \sin\theta \hat{\boldsymbol{\phi}}\end{aligned} \hspace{\stretch{1}}(2.68)

## Problem 7.

### Statement.

Show that

\begin{aligned}e^{-i\pi L_x /\hbar } {\lvert {l,m} \rangle} = {\lvert {l,m-1} \rangle}\end{aligned} \hspace{\stretch{1}}(2.72)

TODO.

# References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.

[2] R. Liboff. Introductory quantum mechanics. 2003.

## Desai Chapter II notes and problems.

Posted by peeterjoot on October 9, 2010

# Motivation.

Chapter II notes for [1].

# Notes

## Canonical Commutator

Based on the canonical relationship $[X,P] = i\hbar$, and $\left\langle{{x'}} \vert {{x}}\right\rangle = \delta(x'-x)$, Desai determines the form of the $P$ operator in continuous space. A consequence of this is that the matrix element of the momentum operator is found to have a delta function specification

\begin{aligned}{\langle {x'} \rvert} P {\lvert {x} \rangle} = \delta(x - x') \left( -i \hbar \frac{d}{dx} \right).\end{aligned}

In particular the matrix element associated with the state ${\lvert {\phi} \rangle}$ is found to be

\begin{aligned}{\langle {x'} \rvert} P {\lvert {\phi} \rangle} = -i \hbar \frac{d}{dx'} \phi(x').\end{aligned}

Compare this to [2], where this last is taken as the definition of the momentum operator, and the relationship to the delta function is not spelled out explicitly. This canonical commutator approach, while more abstract, seems to have less black magic involved in the setup. We do require the commutator relationship $[X,P] = i\hbar$ to be pulled out of a magic hat, but at least the magic show is a structured one based on a small set of core assumptions.

It will likely be good to come back to this later when trying to reconcile this new (for me) Dirac notation with the more basic notation I’m already comfortable with. When trying to compare the two, it will be good to note that there is a matrix element that is implied in the more old fashioned treatment in a book such as [3].

There is one fundamental assumption that appears to be made in this section that isn’t justified by anything except the end result. That is the assumption that $P$ is a derivative like operator, acting with a product rule action. That’s used to obtain (2.28) and is a fairly black magic operation. This same assumption is also hiding, somewhat sneakily, in the manipulation for (2.44).

If one has to make the assumption that $P$ is a derivative like operator, I don’t feel this method of introducing it is any less arbitrary. It is still pulled out of a magic hat, only because the answer is known ahead of time. The approach of [3], where the derivative nature is presented as a consequence of transforming (via Fourier transforms) from the position to the momentum representation, seems much more intuitive and less arbitrary.

## Generalized momentum commutator.

It is stated that

\begin{aligned}[P,X^n] = - n i \hbar X^{n-1}.\end{aligned}

Let’s prove this. The $n=1$ case is the canonical commutator, which is assumed. Is there any good way to justify that from first principles, as presented in the text? We have to prove this for $n$, given the relationship for $n-1$. Expanding the $n$th power commutator we have

\begin{aligned}[P,X^n] &= P X^n - X^n P \\ &= P X^{n-1} X - X^{n } P \\ \end{aligned}

Rearranging the $n-1$ result we have

\begin{aligned}P X^{n-1} = X^{n-1} P - (n-1) i \hbar X^{n-2},\end{aligned}

and can insert that in our $[P,X^n]$ expansion for

\begin{aligned}[P,X^n] &= \left( X^{n-1} P - (n-1) i \hbar X^{n-2} \right)X - X^{n } P \\ &= X^{n-1} (PX) - (n-1) i \hbar X^{n-1} - X^{n } P \\ &= X^{n-1} ( X P - i\hbar) - (n-1) i \hbar X^{n-1} - X^{n } P \\ &= -X^{n-1} i\hbar - (n-1) i \hbar X^{n-1} \\ &= -n i \hbar X^{n-1} \qquad\square\end{aligned}
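A quick finite-difference spot check of this commutator (my own, not from the text, with $\hbar = 1$ and $P = -i\, d/dx$), applied to an arbitrary test function:

```python
import math

def deriv(f, x, h=1e-5):
    # central difference derivative
    return (f(x + h) - f(x - h)) / (2 * h)

def commutator_P_Xn(f, n, x):
    # [P, X^n] f = P(x^n f) - x^n P f, with P = -i d/dx (hbar = 1)
    xn_f = lambda t: t**n * f(t)
    return -1j * deriv(xn_f, x) - x**n * (-1j) * deriv(f, x)

f = lambda x: math.exp(-x * x) * (1 + x)   # arbitrary smooth test function
x0 = 0.6
for n in (1, 2, 3, 4):
    expected = -n * 1j * x0**(n - 1) * f(x0)   # -n i hbar x^{n-1} f
    assert abs(commutator_P_Xn(f, n, x0) - expected) < 1e-7
```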

## Uncertainty principle.

The origin of the statement $[\Delta A, \Delta B] = [A, B]$ is not something that seemed obvious. Expanding this out however is straightforward, and clarifies things. That is

\begin{aligned}[\Delta A, \Delta B] &= (A - \left\langle{{A}}\right\rangle) (B - \left\langle{{B}}\right\rangle) - (B - \left\langle{{B}}\right\rangle) (A - \left\langle{{A}}\right\rangle) \\ &= \left( A B - \left\langle{{A}}\right\rangle B - \left\langle{{B}}\right\rangle A +\left\langle{{A}}\right\rangle \left\langle{{B}}\right\rangle \right)-\left( B A - \left\langle{{B}}\right\rangle A - \left\langle{{A}}\right\rangle B +\left\langle{{B}}\right\rangle \left\langle{{A}}\right\rangle \right) \\ &= A B - B A \\ &= [A, B]\qquad\square\end{aligned}

## Size of a particle

I found it curious that using $\Delta x \Delta p \approx \hbar$ instead of $\Delta x \Delta p \ge \hbar/2$, was sufficient to obtain the hydrogen ground state energy $E_{\text{min}} = -e^2/2 a_0$, without also having to do any factor of two fudging.

## Space displacement operator.

### Initial notes.

I’d be curious to know if others find the loose use of equality for approximation after approximation slightly disturbing too?

I also find it curious that (2.140) is written

\begin{aligned}D(x) = \exp\left( -i \frac{P}{\hbar} x \right),\end{aligned}

and not

\begin{aligned}D(x) = \exp\left( -i x \frac{P}{\hbar} \right).\end{aligned}

Is this intentional? It doesn’t seem like $P$ ought to be acting on $x$ in this case, so why order the terms that way?

Expanding the application of this operator, or at least its first order Taylor series, is helpful to get an idea about this. Doing so, with the original $\Delta x'$ value used in the derivation of the text we have to start

\begin{aligned}D(\Delta x') {\lvert {\phi} \rangle} &\approx \left(1 - i \frac{P}{\hbar} \Delta x' \right) {\lvert {\phi} \rangle} \\ &= \left(1 - i \left( -i \hbar \delta(x -x') \frac{\partial}{\partial x} \right) \frac{1}{{\hbar}} \Delta x'\right) {\lvert {\phi} \rangle} \\ \end{aligned}

This shows that the $\Delta x'$ factor can be commuted with the momentum operator, as it is not a function of $x'$, so the question of $P x$ vs. $x P$ above appears to be a non-issue.

Regardless of that conclusion, it seems worthy to continue an attempt at expanding this shift operator action on the state vector. Let’s do so, but do so by computing the matrix element ${\langle {x'} \rvert} D(\Delta x') {\lvert {\phi} \rangle}$. That is

\begin{aligned}{\langle {x'} \rvert} D(\Delta x') {\lvert {\phi} \rangle} &\approx\left\langle{{x'}} \vert {{\phi}}\right\rangle - {\langle {x'} \rvert} \delta(x -x') \frac{\partial}{\partial x} \Delta x' {\lvert {\phi} \rangle} \\ &=\phi(x') - \int {\langle {x'} \rvert} \delta(x -x') \frac{\partial}{\partial x} \Delta x' {\lvert {x'} \rangle} \left\langle{{x'}} \vert {{\phi}}\right\rangle dx' \\ &=\phi(x') - \Delta x' \int \delta(x -x') \frac{\partial}{\partial x} \left\langle{{x'}} \vert {{\phi}}\right\rangle dx' \\ &=\phi(x') - \Delta x' \frac{\partial}{\partial x'} \left\langle{{x'}} \vert {{\phi}}\right\rangle \\ &=\phi(x') - \Delta x' \frac{\partial}{\partial x'} \phi(x') \\ \end{aligned}

This is consistent with the text. It is interesting, and initially surprising that the space displacement operator when applied to a state vector introduces a negative shift in the wave function associated with that state vector. In the derivation of the text, this was associated with the use of integration by parts (ie: due to the sign change in that integration). Here we see it sneak back in, due to the $i^2$ once the momentum operator is expanded completely.

As last note and question. The first order Taylor approximation of the momentum operator was used. If the higher order terms are retained, as in

\begin{aligned}\exp\left( -i \Delta x' \frac{P}{\hbar} \right) = 1 - \Delta x' \delta(x -x') \frac{\partial}{\partial x} + \frac{1}{{2}} \left( - \Delta x' \delta(x -x') \frac{\partial}{\partial x} \right)^2 + \cdots,\end{aligned}

then how does one evaluate a squared delta function (or Nth power)?

Talked to Vatche about this after class. The key to this is sequential evaluation. Considering the simple case for $P^2$, we evaluate one operator at a time, and never actually square the delta function

\begin{aligned}{\langle {x'} \rvert} P^2 {\lvert {\phi} \rangle} \end{aligned}

I was also questioned why I was including the delta function at this point. Why would I do that? Thinking further on this, I see that it isn’t a reasonable thing to do. That delta function only comes into the mix when one takes the matrix element of the momentum operator, as in

\begin{aligned}{\langle {x'} \rvert} P {\lvert {x} \rangle} = -i \hbar \delta(x-x') \frac{d}{dx'}. \end{aligned}

This is very much like the fact that the delta function only shows up in the continuous representation in other context where one has matrix elements. The most simple example of which is just

\begin{aligned}\left\langle{{x'}} \vert {{x}}\right\rangle = \delta(x-x').\end{aligned}

I also see now that the momentum operator is directly identified with the derivative (no delta function) in two other places in the text. These are equations (2.32) and (2.46) respectively:

\begin{aligned}P(x) &= -i \hbar \frac{d}{dx} \\ P &= -i \hbar \frac{d}{dX}.\end{aligned}

In the first, (2.32), I thought the $P(x)$ was somehow different, just a helpful expression found along the way, but now it occurs to me that this was intended to be an unambiguous representation of the momentum operator itself.

### A second try.

Getting a feel for this Dirac notation takes a bit of adjustment. Let’s try evaluating the matrix element for the space displacement operator again, without abusing the notation, or thinking that we have a requirement for squared delta functions and other weirdness. We start with

\begin{aligned}D(\Delta x') {\lvert {\phi} \rangle}&=e^{-\frac{i P \Delta x'}{\hbar}} {\lvert {\phi} \rangle} \\ &=\int dx e^{-\frac{i P \Delta x'}{\hbar}} {\lvert {x} \rangle}\left\langle{{x}} \vert {{\phi}}\right\rangle \\ &=\int dx e^{-\frac{i P \Delta x'}{\hbar}} {\lvert {x} \rangle} \phi(x).\end{aligned}

Now, to evaluate $e^{-\frac{i P \Delta x'}{\hbar}} {\lvert {x} \rangle}$, we can expand in series

\begin{aligned}e^{-\frac{i P \Delta x'}{\hbar}} {\lvert {x} \rangle}&={\lvert {x} \rangle} + \sum_{k=1}^\infty \frac{1}{{k!}} \left( \frac{-i \Delta x'}{\hbar} \right)^k P^k {\lvert {x} \rangle}.\end{aligned}

It is tempting to left multiply by ${\langle {x'} \rvert}$ and commute that past the $P^k$, then write $P^k = \left( -i \hbar \frac{d}{dx} \right)^k$. That probably produces the correct result, but is abusive of the notation. We can still left multiply by ${\langle {x'} \rvert}$, but to be proper, I think we have to leave that on the left of the $P^k$ operator. This yields

\begin{aligned}{\langle {x'} \rvert} D(\Delta x') {\lvert {\phi} \rangle}&=\int dx \left( \left\langle{{x'}} \vert {{x}}\right\rangle + \sum_{k=1}^\infty \frac{1}{{k!}} \left( \frac{-i \Delta x'}{\hbar} \right)^k {\langle {x'} \rvert} P^k {\lvert {x} \rangle}\right) \phi(x) \\ &=\int dx \delta(x'- x) \phi(x)+\sum_{k=1}^\infty \frac{1}{{k!}} \left( \frac{-i \Delta x'}{\hbar} \right)^k \int dx {\langle {x'} \rvert} P^k {\lvert {x} \rangle} \phi(x).\end{aligned}

The first integral is just $\phi(x')$, and we are left with integrating the higher power momentum matrix elements, applied to the wave function $\phi(x)$. We can proceed iteratively to expand those integrals

\begin{aligned}\int dx {\langle {x'} \rvert} P^k {\lvert {x} \rangle} \phi(x)&= \iint dx dx'' {\langle {x'} \rvert} P^{k-1} {\lvert {x''} \rangle} {\langle {x''} \rvert} P {\lvert {x} \rangle} \phi(x) \\ \end{aligned}

Now we have a matrix element that we know what to do with. Namely, ${\langle {x''} \rvert} P {\lvert {x} \rangle} = -i \hbar \delta(x''-x) {\partial {}}/{\partial {x}}$, which yields

\begin{aligned}\int dx {\langle {x'} \rvert} P^k {\lvert {x} \rangle} \phi(x)&= -i \hbar \iint dx dx'' {\langle {x'} \rvert} P^{k-1} {\lvert {x''} \rangle} \delta(x''-x) \frac{\partial {}}{\partial {x}} \phi(x) \\ &= -i \hbar \int dx {\langle {x'} \rvert} P^{k-1} {\lvert {x} \rangle} \frac{\partial {\phi(x)}}{\partial {x}}.\end{aligned}

Each similar application of the identity operator brings down another $-i\hbar$ and derivative yielding

\begin{aligned}\int dx {\langle {x'} \rvert} P^k {\lvert {x} \rangle} \phi(x)&= (-i \hbar)^k \frac{\partial^k \phi(x')}{\partial {x'}^k}.\end{aligned}

Going back to our displacement operator matrix element, we now have

\begin{aligned}{\langle {x'} \rvert} D(\Delta x') {\lvert {\phi} \rangle}&=\phi(x')+\sum_{k=1}^\infty \frac{1}{{k!}} \left( \frac{-i \Delta x'}{\hbar} \right)^k (-i \hbar)^k \frac{\partial^k \phi(x')}{\partial {x'}^k} \\ &=\phi(x') +\sum_{k=1}^\infty \frac{1}{{k!}} \left( - \Delta x' \frac{\partial }{\partial x'} \right)^k \phi(x') \\ &= \phi(x' - \Delta x').\end{aligned}

This shows nicely why the sign goes negative and it is no longer surprising when one observes that this can be obtained directly by using the adjoint relationship

\begin{aligned}{\langle {x'} \rvert} D(\Delta x') {\lvert {\phi} \rangle}&=(D^\dagger(\Delta x') {\lvert {x'} \rangle})^\dagger {\lvert {\phi} \rangle} \\ &=(D(-\Delta x') {\lvert {x'} \rangle})^\dagger {\lvert {\phi} \rangle} \\ &={\lvert {x' - \Delta x'} \rangle}^\dagger {\lvert {\phi} \rangle} \\ &=\left\langle{{x' - \Delta x'}} \vert {{\phi}}\right\rangle \\ &=\phi(x' - \Delta x')\end{aligned}

That’s a whole lot easier than the integral manipulation, but at least shows that we now have a feel for the notation, and have confirmed the exponential formulation of the operator nicely.
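The series manipulation above is also easy to check numerically (a sketch of my own): the partial sums of $\sum_k (-\Delta x')^k \phi^{(k)}(x')/k!$ should converge to $\phi(x' - \Delta x')$. Using $\phi = \sin$, whose $k$th derivative is just $\sin(x + k\pi/2)$:

```python
import math

def shifted_by_series(x, dx, nterms=30):
    # partial sum of sum_k ((-dx)^k / k!) d^k(sin)/dx^k evaluated at x;
    # the k-th derivative of sin is sin(x + k pi/2)
    total = 0.0
    for k in range(nterms):
        total += (-dx) ** k / math.factorial(k) * math.sin(x + k * math.pi / 2)
    return total

x, dx = 1.2, 0.8
assert abs(shifted_by_series(x, dx) - math.sin(x - dx)) < 1e-10
```

The negative shift is exactly the Taylor series of $\phi(x' - \Delta x')$ about $x'$, which is the content of the displacement operator result.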

## Time evolution operator

The phrase “we identify time evolution with the Hamiltonian”. What a magic hat maneuver! Is there a way that this would be logical without already knowing the answer?

## Dispersion delta function representation.

The principal part notation here I found a bit unclear. He writes

\begin{aligned}\lim_{\epsilon \rightarrow 0} \frac{(x'-x)}{(x'-x)^2 + \epsilon^2}= P\left( \frac{1}{{x' - x}} \right).\end{aligned}

In complex variables the principal part of a Laurent series is the set of negative power terms. For example, for $f(z) = \sum a_k z^k$, the principal part is

\begin{aligned}\sum_{k = -\infty}^{-1} a_k z^k\end{aligned}

This doesn’t vanish at $z = 0$ as the principal part in this section is stated to, so presumably the $P$ here is instead the Cauchy principal value of the integral. In (2.202) he pulls the $P$ out of the integral, but I think the intention is really to keep it associated with the $1/(x'-x)$, as in

\begin{aligned}\lim_{\epsilon \rightarrow 0} \frac{1}{{\pi}} \int_0^\infty dx' \frac{f(x')}{x'-x - i \epsilon}= \frac{1}{{\pi}} \int_0^\infty dx' f(x') P\left( \frac{1}{{x' - x}} \right) + i f(x)\end{aligned}

Will this even have any relevance in this text?
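The $i f(x)$ term can at least be verified numerically (my own check, not from the text): the imaginary part of $1/(x' - x - i\epsilon)$ is $\epsilon/((x'-x)^2 + \epsilon^2)$, which divided by $\pi$ is a delta sequence, so the imaginary part of the regularized integral should tend to $f(x)$ as $\epsilon \rightarrow 0$. With a Gaussian $f$ and a plain trapezoid sum:

```python
import math

def im_part(f, x, eps, a=-10.0, b=10.0, n=200000):
    # (1/pi) Im \int_a^b f(x') / (x' - x - i eps) dx' via the trapezoid rule;
    # the imaginary part of the integrand is eps f(x') / ((x'-x)^2 + eps^2)
    h = (b - a) / n
    total = 0.0
    for i in range(n + 1):
        xp = a + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * eps * f(xp) / ((xp - x) ** 2 + eps ** 2)
    return total * h / math.pi

f = lambda t: math.exp(-t * t)
x0 = 0.5
val = im_part(f, x0, eps=1e-3)
assert abs(val - f(x0)) < 2e-3   # tends to f(x) as eps -> 0
```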

# Problems.

## 1. Cauchy-Schwarz identity.

We wish to find the value of $\lambda$ that is just right to come up with the desired identity. The starting point is the expansion of the inner product

\begin{aligned}\left\langle{{a + \lambda b}} \vert {{a + \lambda b}}\right\rangle&= \left\langle{{a}} \vert {{a}}\right\rangle + \lambda \lambda^{*} \left\langle{{b}} \vert {{b}}\right\rangle + \lambda \left\langle{{a}} \vert {{b}}\right\rangle + \lambda^{*} \left\langle{{b}} \vert {{a}}\right\rangle\end{aligned}

There is a trial and error approach to this problem, where one magically picks $\lambda \propto \left\langle{{b}} \vert {{a}}\right\rangle/\left\langle{{b}} \vert {{b}}\right\rangle^n$, and figures out the proportionality constant and scale factor for the denominator to do the job. A nicer way is to set up the problem as an extreme value exercise. We can write this inner product as a function of $\lambda$, and proceed with setting the derivative equal to zero

\begin{aligned}f(\lambda) =\left\langle{{a}} \vert {{a}}\right\rangle + \lambda \lambda^{*} \left\langle{{b}} \vert {{b}}\right\rangle + \lambda \left\langle{{a}} \vert {{b}}\right\rangle + \lambda^{*} \left\langle{{b}} \vert {{a}}\right\rangle \\ \end{aligned}

Its derivative is

\begin{aligned}\frac{df}{d\lambda} &=\left(\lambda^{*} + \lambda \frac{d\lambda^{*}}{d\lambda}\right) \left\langle{{b}} \vert {{b}}\right\rangle + \left\langle{{a}} \vert {{b}}\right\rangle + \frac{d\lambda^{*}}{d\lambda} \left\langle{{b}} \vert {{a}}\right\rangle \\ &=\lambda^{*} \left\langle{{b}} \vert {{b}}\right\rangle + \left\langle{{a}} \vert {{b}}\right\rangle +\frac{d\lambda^{*}}{d\lambda} \Bigl( \lambda \left\langle{{b}} \vert {{b}}\right\rangle + \left\langle{{b}} \vert {{a}}\right\rangle \Bigr)\end{aligned}

Now, we have a bit of a problem with $d\lambda^{*}/d\lambda$, since that doesn’t actually exist. However, that problem can be sidestepped if we insist that the factor that multiplies it is zero. That provides a value for $\lambda$ that also kills off the remainder of $df/d\lambda$. That value is

\begin{aligned}\lambda = - \frac{\left\langle{{b}} \vert {{a}}\right\rangle }{ \left\langle{{b}} \vert {{b}}\right\rangle }.\end{aligned}

Back substitution yields

\begin{aligned}\left\langle{{a + \lambda b}} \vert {{a + \lambda b}}\right\rangle&= \left\langle{{a}} \vert {{a}}\right\rangle - \left\langle{{a}} \vert {{b}}\right\rangle\left\langle{{b}} \vert {{a}}\right\rangle/\left\langle{{b}} \vert {{b}}\right\rangle \ge 0.\end{aligned}

This is easily rearranged to obtain the desired result:

\begin{aligned}\left\langle{{a}} \vert {{a}}\right\rangle \left\langle{{b}} \vert {{b}}\right\rangle \ge \left\langle{{b}} \vert {{a}}\right\rangle\left\langle{{a}} \vert {{b}}\right\rangle.\end{aligned}
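This extremum argument is easy to spot check numerically (a sketch of mine, using the physics convention that the left slot of the inner product is conjugated):

```python
import random

random.seed(1)

def inner(u, v):
    # <u|v>, conjugating the left slot
    return sum(ui.conjugate() * vi for ui, vi in zip(u, v))

a = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(5)]
b = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(5)]

# the extremizing value found above
lam = -inner(b, a) / inner(b, b)
c = [ai + lam * bi for ai, bi in zip(a, b)]

lhs = inner(c, c)
rhs = inner(a, a) - inner(a, b) * inner(b, a) / inner(b, b)
assert abs(lhs - rhs) < 1e-10      # <a+lb|a+lb> = <a|a> - <a|b><b|a>/<b|b>
assert lhs.real >= 0               # norms are non-negative
# which rearranges to the Schwarz inequality <a|a><b|b> >= <b|a><a|b>
assert (inner(a, a) * inner(b, b)).real >= (inner(b, a) * inner(a, b)).real
```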

## 2. Uncertainty relation.

### The problem.

Using the Schwarz inequality of problem 1, and a decomposition of the operator product into symmetric and antisymmetric (anticommutator and commutator) parts, show that

\begin{aligned}{\left\lvert{\Delta A \Delta B}\right\rvert}^2 \ge \frac{1}{{4}}{\left\lvert{ \left[{A},{B}\right]}\right\rvert}^2,\end{aligned} \hspace{\stretch{1}}(3.1)

and that this result implies

\begin{aligned}\Delta x \Delta p \ge \frac{\hbar}{2}.\end{aligned} \hspace{\stretch{1}}(3.2)

### The solution.

This problem seems somewhat misleading, since the Schwarz inequality appears to have nothing to do with showing 3.1, but only with the split of the operator product into symmetric and antisymmetric parts. Another possible tricky thing about this problem is that there is no mention of the anticommutator in the text at this point that I can find, so if one does not know what it is defined as, it must be figured out by context.

I’ve also had an interpretation problem with this, since $\Delta x \Delta p$ in 3.2 cannot mean the operators, as is the case in 3.1. My assumption is that in 3.2 these deltas are really absolute expectation values, and that we really want to show

\begin{aligned}{\left\lvert{\left\langle{{\Delta X}}\right\rangle}\right\rvert} {\left\lvert{\left\langle{{\Delta P}}\right\rangle}\right\rvert} \ge \frac{\hbar}{2}.\end{aligned} \hspace{\stretch{1}}(3.3)

However, I’m unable to demonstrate this. Instead I’m able to show two things:

\begin{aligned}\left\langle{{(\Delta X)^2 }}\right\rangle \left\langle{{(\Delta P)^2 }}\right\rangle&\ge \frac{\hbar^2}{4} \\ {\left\lvert{\left\langle{{\Delta X \Delta P }}\right\rangle }\right\rvert}&\ge\frac{\hbar}{2}\end{aligned}

Is one of these the result to be shown? Note that only the first of these required the Schwarz inequality. Also, it seems strange that we want the expectation of the operator $\Delta X\Delta P$?

Starting with the first part of the problem, note that we can factor any operator product into a linear combination of two Hermitian operators using the commutator and anticommutator. That is

\begin{aligned}C D &= \frac{1}{{2}}\left( C D + D C\right) + \frac{1}{{2}}\left( C D - D C\right) \\ &= \frac{1}{{2}}\left( C D + D C\right) + \frac{1}{{2i}}\left( C D - D C\right) i \\ &\equiv \frac{1}{{2}}\left\{{C},{D}\right\}+\frac{1}{{2i}} \left[{C},{D}\right] i\end{aligned}

For Hermitian operators $C$, and $D$, using $(CD)^\dagger = D^\dagger C^\dagger = D C$, we can show that the two operator factors are Hermitian,

\begin{aligned}\left(\frac{1}{{2}}\left\{{C},{D}\right\}\right)^\dagger&= \frac{1}{{2}}\left( C D + D C\right)^\dagger \\ &= \frac{1}{{2}}\left( D^\dagger C^\dagger + C^\dagger D^\dagger\right) \\ &= \frac{1}{{2}}\left( D C + C D \right) \\ &= \frac{1}{{2}}\left\{{C},{D}\right\},\end{aligned}

\begin{aligned}\left(\frac{1}{{2}}\left[{C},{D}\right] i\right)^\dagger&= -\frac{i}{2} \left( C D - D C\right)^\dagger \\ &= -\frac{i}{2}\left( D^\dagger C^\dagger - C^\dagger D^\dagger\right) \\ &= -\frac{i}{2}\left( D C - C D \right) \\ &=\frac{1}{{2}}\left[{C},{D}\right] i\end{aligned}

So for the absolute squared value of the expectation of product of two operators we have

\begin{aligned}\left\langle{{C D }}\right\rangle^2&={\left\lvert{\left\langle{{\frac{1}{{2}}\left\{{C},{D}\right\} +\frac{1}{{2i}} \left[{C},{D}\right] i}}\right\rangle}\right\rvert}^2 \\ &={\left\lvert{ \frac{1}{{2}}\left\langle{{\left\{{C},{D}\right\}}}\right\rangle +\frac{1}{{2i}} \left\langle{{\left[{C},{D}\right] i}}\right\rangle }\right\rvert}^2.\end{aligned}

Now, these expectation values are real, given the fact that these operators are Hermitian. Suppose we write $a = \left\langle{{\left\{{C},{D}\right\}}}\right\rangle/2$, and $b = \left\langle{{\left[{C},{D}\right]i}}\right\rangle/2$, then we have

\begin{aligned}{\left\lvert{ \frac{1}{{2}}\left\langle{{\left\{{C},{D}\right\}}}\right\rangle +\frac{1}{{2i}} \left\langle{{\left[{C},{D}\right] i}}\right\rangle }\right\rvert}^2&={\left\lvert{ a - b i }\right\rvert}^2 \\ &=( a - b i ) ( a + b i ) \\ &=a^2 + b^2\end{aligned}

So we have for the squared expectation value of the operator product $C D$

\begin{aligned}\left\langle{{C D }}\right\rangle^2 &=\frac{1}{{4}}\left\langle{{\left\{{C},{D}\right\}}}\right\rangle^2 +\frac{1}{{4}} \left\langle{{\left[{C},{D}\right] i}}\right\rangle^2 \\ &=\frac{1}{{4}}{\left\lvert{\left\langle{{\left\{{C},{D}\right\}}}\right\rangle}\right\rvert}^2 +\frac{1}{{4}} {\left\lvert{\left\langle{{\left[{C},{D}\right] i}}\right\rangle}\right\rvert}^2 \\ &=\frac{1}{{4}}{\left\lvert{\left\langle{{\left\{{C},{D}\right\}}}\right\rangle}\right\rvert}^2 +\frac{1}{{4}} {\left\lvert{\left\langle{{\left[{C},{D}\right]}}\right\rangle}\right\rvert}^2 \\ &\ge\frac{1}{{4}} {\left\lvert{\left\langle{{\left[{C},{D}\right]}}\right\rangle}\right\rvert}^2.\end{aligned}

With $C = \Delta A$, and $D = \Delta B$, this almost completes the first part of the problem. The remaining thing to note is that $\left[{\Delta A},{\Delta B}\right] = \left[{A},{B}\right]$. This last is straightforward to show

\begin{aligned}\left[{\Delta A},{\Delta B}\right] &=\left[{A - \left\langle{{A}}\right\rangle},{B - \left\langle{{B}}\right\rangle}\right] \\ &=(A - \left\langle{{A}}\right\rangle)(B - \left\langle{{B}}\right\rangle)-(B - \left\langle{{B}}\right\rangle)(A - \left\langle{{A}}\right\rangle) \\ &=\left( A B - \left\langle{{A}}\right\rangle B - \left\langle{{B}}\right\rangle A + \left\langle{{A}}\right\rangle \left\langle{{B}}\right\rangle \right)-\left( B A - \left\langle{{B}}\right\rangle A - \left\langle{{A}}\right\rangle B + \left\langle{{B}}\right\rangle \left\langle{{A}}\right\rangle \right) \\ &=A B - B A \\ &=\left[{A},{B}\right].\end{aligned}

Putting the pieces together we have

\begin{aligned}\left\langle{{\Delta A \Delta B }}\right\rangle^2 &\ge\frac{1}{{4}} {\left\lvert{\left\langle{{\left[{A},{B}\right]}}\right\rangle}\right\rvert}^2.\end{aligned} \hspace{\stretch{1}}(3.4)

With expectation value implied by the absolute squared, this reproduces relation 3.1 as desired.

For the remaining part of the problem, with ${\lvert {\alpha} \rangle} = \Delta A {\lvert {\psi} \rangle}$, and ${\lvert {\beta} \rangle} = \Delta B {\lvert {\psi} \rangle}$, and noting that $(\Delta A)^\dagger = \Delta A$ for Hermitian operator $A$ (or $B$ too in this case), the Schwarz inequality

\begin{aligned}\left\langle{{\alpha}} \vert {{\alpha}}\right\rangle\left\langle{{\beta}} \vert {{\beta}}\right\rangle &\ge {\left\lvert{\left\langle{{\beta}} \vert {{\alpha}}\right\rangle}\right\rvert}^2,\end{aligned} \hspace{\stretch{1}}(3.5)

takes the following form

\begin{aligned}{\langle {\psi} \rvert}(\Delta A)^\dagger \Delta A {\lvert {\psi} \rangle} {\langle {\psi} \rvert}(\Delta B)^\dagger \Delta B {\lvert {\psi} \rangle} &\ge {\left\lvert{{\langle {\psi} \rvert} (\Delta B)^\dagger \Delta A {\lvert {\psi} \rangle}}\right\rvert}^2.\end{aligned}

These are expectation values, and allow us to use 3.4 to show

\begin{aligned}\left\langle{{(\Delta A)^2 }}\right\rangle \left\langle{{(\Delta B)^2 }}\right\rangle&\ge {\left\lvert{ \left\langle{{\Delta B \Delta A }}\right\rangle }\right\rvert}^2 \\ &= \frac{1}{{4}} {\left\lvert{\left\langle{{\left[{B},{A}\right]}}\right\rangle}\right\rvert}^2.\end{aligned}

For $A = X$, and $B = P$, this is

\begin{aligned}\left\langle{{(\Delta X)^2 }}\right\rangle \left\langle{{(\Delta P)^2 }}\right\rangle&\ge \frac{\hbar^2}{4}\end{aligned} \hspace{\stretch{1}}(3.6)

Hmm. This doesn’t look like it is quite the result that I expected? We have $\left\langle{{(\Delta X)^2 }}\right\rangle \left\langle{{(\Delta P)^2 }}\right\rangle$ instead of $\left\langle{{\Delta X }}\right\rangle^2 \left\langle{{\Delta P}}\right\rangle^2$?

Let’s step back slightly. Without introducing the Schwarz inequality the result 3.4 of the commutator manipulation, and $\left[{X},{P}\right] = i \hbar$ gives us

\begin{aligned}\left\langle{{\Delta X \Delta P }}\right\rangle^2 &\ge\frac{\hbar^2}{4} ,\end{aligned}

and taking roots we have

\begin{aligned}{\left\lvert{\left\langle{{\Delta X \Delta P }}\right\rangle }\right\rvert}&\ge\frac{\hbar}{2}.\end{aligned} \hspace{\stretch{1}}(3.7)

Is this really what we were intended to show?

Attempting to answer this myself, I refer to [2], where I find he uses a loose notation for this too, and writes in his equation 3.36

\begin{aligned}(\Delta C)^2 = \left\langle{{ (C - \left\langle{{C}}\right\rangle)^2 }}\right\rangle = \left\langle{{C^2}}\right\rangle - \left\langle{{C}}\right\rangle^2\end{aligned}

This usage seems consistent with that, so I think that it is a reasonable assumption that uncertainty relation $\Delta x \Delta p \ge \hbar/2$ is really shorthand notation for the more cumbersome relation involving roots of the expectations of mean-square deviation operators

\begin{aligned}\sqrt{\left\langle{{ (X - \left\langle{{X}}\right\rangle)^2 }}\right\rangle}\sqrt{\left\langle{{ (P - \left\langle{{P}}\right\rangle)^2 }}\right\rangle} \ge \frac{\hbar}{2}.\end{aligned} \hspace{\stretch{1}}(3.8)

This is in fact what was proved arriving at 3.6.

Ah ha! Found it. Referring to equation 2.93 in the text, I see that a lowercase notation $\Delta x = \sqrt{(\Delta X)^2}$ was introduced. This explains what seemed like ambiguous notation … it was perfectly well explained, just introduced in passing in a way that was easy to miss.
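As a sanity check on 3.8, a Gaussian wave packet should saturate the bound exactly. A sympy sketch of that computation (my own choice of symbols and state, with $\left\langle{{X}}\right\rangle = \left\langle{{P}}\right\rangle = 0$ by symmetry):

```python
import sympy as sp

x = sp.symbols('x', real=True)
sigma, hbar = sp.symbols('sigma hbar', positive=True)

# Normalized Gaussian, the standard minimum uncertainty state.
psi = (1 / (sp.pi * sigma**2))**sp.Rational(1, 4) * sp.exp(-x**2 / (2 * sigma**2))

# <X> = <P> = 0 by symmetry, so the mean square deviations are <X^2> and <P^2>.
var_x = sp.integrate(psi * x**2 * psi, (x, -sp.oo, sp.oo))
var_p = sp.integrate(psi * (-hbar**2 * sp.diff(psi, x, 2)), (x, -sp.oo, sp.oo))

assert sp.simplify(var_x * var_p - hbar**2 / 4) == 0
```

This produces $\left\langle{{X^2}}\right\rangle = \sigma^2/2$ and $\left\langle{{P^2}}\right\rangle = \hbar^2/2\sigma^2$, whose product is exactly $\hbar^2/4$.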

## 3.

This problem can be done by inspection.

TODO.

## 5. Hermitian radial differential operator.

Show that the operator

\begin{aligned}R = -i \hbar \frac{\partial {}}{\partial {r}},\end{aligned}

is not Hermitian, and find the constant $a$ so that

\begin{aligned}T = -i \hbar \left( \frac{\partial {}}{\partial {r}} + \frac{a}{r} \right),\end{aligned}

is Hermitian.

For the first part of the problem we can show that

\begin{aligned}\left( {\langle {\hat{\boldsymbol{\psi}}} \rvert} R {\lvert {\hat{\boldsymbol{\phi}}} \rangle} \right)^{*} \ne {\langle {\hat{\boldsymbol{\phi}}} \rvert} R {\lvert {\hat{\boldsymbol{\psi}}} \rangle}.\end{aligned}

For the RHS we have

\begin{aligned}{\langle {\hat{\boldsymbol{\phi}}} \rvert} R {\lvert {\hat{\boldsymbol{\psi}}} \rangle} = -i \hbar \iiint dr d\theta d\phi r^2 \sin\theta \hat{\boldsymbol{\phi}}^{*} \frac{\partial {\hat{\boldsymbol{\psi}}}}{\partial {r}}\end{aligned}

and for the LHS we have

\begin{aligned}\left( {\langle {\hat{\boldsymbol{\psi}}} \rvert} R {\lvert {\hat{\boldsymbol{\phi}}} \rangle} \right)^{*}&= i \hbar \iiint dr d\theta d\phi r^2 \sin\theta \hat{\boldsymbol{\psi}} \frac{\partial {\hat{\boldsymbol{\phi}}^{*}}}{\partial {r}} \\ &= -i \hbar \iiint dr d\theta d\phi \sin\theta \left( 2 r \hat{\boldsymbol{\psi}} + r^2 \frac{\partial {\hat{\boldsymbol{\psi}}}}{\partial {r}} \right)\hat{\boldsymbol{\phi}}^{*} \\ \end{aligned}

So, unless the extra $2 r \hat{\boldsymbol{\psi}}$ term integrates to zero, the operator $R$ is not Hermitian.

Moving on to finding the constant $a$ such that $T$ is Hermitian we calculate

\begin{aligned}\left( {\langle {\hat{\boldsymbol{\psi}}} \rvert} T {\lvert {\hat{\boldsymbol{\phi}}} \rangle} \right)^{*}&= i \hbar \iiint dr d\theta d\phi r^2 \sin\theta \hat{\boldsymbol{\psi}} \left( \frac{\partial {}}{\partial {r}} + \frac{a}{r} \right) \hat{\boldsymbol{\phi}}^{*} \\ &= i \hbar \iiint dr d\theta d\phi \sin\theta \hat{\boldsymbol{\psi}} \left( r^2 \frac{\partial {}}{\partial {r}} + a r \right) \hat{\boldsymbol{\phi}}^{*} \\ &= -i \hbar \iiint dr d\theta d\phi \sin\theta \left( r^2 \frac{\partial {\hat{\boldsymbol{\psi}}}}{\partial {r}} + 2 r \hat{\boldsymbol{\psi}} - a r \hat{\boldsymbol{\psi}} \right) \hat{\boldsymbol{\phi}}^{*} \\ \end{aligned}

and

\begin{aligned}{\langle {\hat{\boldsymbol{\phi}}} \rvert} T {\lvert {\hat{\boldsymbol{\psi}}} \rangle} = -i \hbar \iiint dr d\theta d\phi \sin\theta \hat{\boldsymbol{\phi}}^{*} \left( r^2 \frac{\partial {\hat{\boldsymbol{\psi}}}}{\partial {r}} + a r \hat{\boldsymbol{\psi}} \right)\end{aligned}

So, for $T$ to be Hermitian, we require

\begin{aligned}2 r - a r = a r.\end{aligned}

So $a = 1$, and our Hermitian operator is

\begin{aligned}T = -i \hbar \left( \frac{\partial {}}{\partial {r}} + \frac{1}{r} \right).\end{aligned}
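The $a = 1$ requirement can also be recovered symbolically. A sympy sketch, using a pair of sample real radial functions that die off at the origin and infinity (my own choice of test functions), solving for the $a$ that makes the radial matrix elements conjugate-symmetric:

```python
import sympy as sp

r, hbar = sp.symbols('r hbar', positive=True)
a = sp.symbols('a', real=True)

# Sample real radial functions, vanishing fast enough at zero and infinity.
f = sp.exp(-r)
g = r * sp.exp(-r)

def T(psi):
    return -sp.I * hbar * (sp.diff(psi, r) + a * psi / r)

# Radial matrix elements with the r^2 dr volume factor (angular parts drop out).
lhs = sp.integrate(f * T(g) * r**2, (r, 0, sp.oo))                 # <f| T |g>
rhs = sp.conjugate(sp.integrate(g * T(f) * r**2, (r, 0, sp.oo)))   # <g| T |f>*

assert sp.solve(sp.expand(lhs - rhs), a) == [1]
```

The difference works out to $-i \hbar (a - 1)/2$ for these test functions, zero only for $a = 1$.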

## 6. Radial directional derivative operator.

### Problem.

Show that

\begin{aligned}D = \mathbf{p} \cdot \hat{\mathbf{r}} + \hat{\mathbf{r}} \cdot \mathbf{p},\end{aligned}

is Hermitian. Expand this operator in spherical coordinates. Compare result to problem 5.

### Solution.

Tackling the spherical coordinates expression of the operator $D$, we have

\begin{aligned}\frac{1}{{-i\hbar}} D \Psi &= \left( \boldsymbol{\nabla} \cdot \hat{\mathbf{r}} + \hat{\mathbf{r}} \cdot \boldsymbol{\nabla} \right) \Psi \\ &= \left( \boldsymbol{\nabla} \cdot \hat{\mathbf{r}} \right) \Psi + \left( \boldsymbol{\nabla} \Psi \right) \cdot \hat{\mathbf{r}} + \hat{\mathbf{r}} \cdot \left(\boldsymbol{\nabla} \Psi\right) \\ &=\left( \boldsymbol{\nabla} \cdot \hat{\mathbf{r}} \right) \Psi + 2 \hat{\mathbf{r}} \cdot \left( \boldsymbol{\nabla} \Psi \right).\end{aligned}

Here braces have been used to denote the extent of the operation of the gradient. In spherical polar coordinates, our gradient is

\begin{aligned}\boldsymbol{\nabla} \equiv \hat{\mathbf{r}} \frac{\partial {}}{\partial {r}}+\hat{\boldsymbol{\theta}} \frac{1}{{r}} \frac{\partial {}}{\partial {\theta}}+\hat{\boldsymbol{\phi}} \frac{1}{{r \sin\theta}} \frac{\partial {}}{\partial {\phi}}.\end{aligned}

This gets us most of the way there, and we have

\begin{aligned}\frac{1}{{-i\hbar}} D \Psi &=2 \frac{\partial {\Psi}}{\partial {r}} + \left( \hat{\mathbf{r}} \cdot \frac{\partial {\hat{\mathbf{r}}}}{\partial {r}}+\frac{1}{{r}} \hat{\boldsymbol{\theta}} \cdot \frac{\partial {\hat{\mathbf{r}}}}{\partial {\theta}}+\frac{1}{{r \sin\theta}} \hat{\boldsymbol{\phi}} \cdot \frac{\partial {\hat{\mathbf{r}}}}{\partial {\phi}}\right) \Psi.\end{aligned}

Since ${\partial {\hat{\mathbf{r}}}}/{\partial {r}} = 0$, we are left with evaluating $\hat{\boldsymbol{\theta}} \cdot {\partial {\hat{\mathbf{r}}}}/{\partial {\theta}}$, and $\hat{\boldsymbol{\phi}} \cdot {\partial {\hat{\mathbf{r}}}}/{\partial {\phi}}$. To do so I chose to employ the (Geometric Algebra) exponential form of the spherical unit vectors [4]

\begin{aligned}I &= \mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_3 \\ \hat{\boldsymbol{\phi}} &= \mathbf{e}_{2} \exp( I \mathbf{e}_3 \phi ) \\ \hat{\mathbf{r}} &= \mathbf{e}_3 \exp( I \hat{\boldsymbol{\phi}} \theta ) \\ \hat{\boldsymbol{\theta}} &= \mathbf{e}_1 \mathbf{e}_2 \hat{\boldsymbol{\phi}} \exp( I \hat{\boldsymbol{\phi}} \theta ).\end{aligned}

The partials of interest are then

\begin{aligned}\frac{\partial {\hat{\mathbf{r}}}}{\partial {\theta}} &= \mathbf{e}_3 I \hat{\boldsymbol{\phi}} \exp( I \hat{\boldsymbol{\phi}} \theta ) = \hat{\boldsymbol{\theta}},\end{aligned}

and

\begin{aligned}\frac{\partial {\hat{\mathbf{r}}}}{\partial {\phi}} &= \frac{\partial {}}{\partial {\phi}} \mathbf{e}_3 \left( \cos\theta + I \hat{\boldsymbol{\phi}} \sin\theta \right) \\ &= \mathbf{e}_1 \mathbf{e}_2 \sin\theta \frac{\partial {\hat{\boldsymbol{\phi}}}}{\partial {\phi}} \\ &= \mathbf{e}_1 \mathbf{e}_2 \sin\theta \mathbf{e}_2 \mathbf{e}_1 \mathbf{e}_2 \exp( I \mathbf{e}_3 \phi ) \\ &= \sin\theta \hat{\boldsymbol{\phi}}.\end{aligned}

Only after computing these, did I find exactly these results for the partials of interest, in mathworld’s Spherical Coordinates page, which confirms these calculations. Note that a different angle convention is used there, so one has to exchange $\phi$, and $\theta$ and the corresponding unit vector labels.
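For anybody who prefers to avoid the Geometric Algebra machinery, the same partials follow from the explicit Cartesian components of the spherical unit vectors. A small sympy check (using the standard physics $(r, \theta, \phi)$ component conventions):

```python
import sympy as sp

theta, phi = sp.symbols('theta phi', real=True)

# Cartesian components of the spherical unit vectors.
rhat  = sp.Matrix([sp.sin(theta) * sp.cos(phi), sp.sin(theta) * sp.sin(phi), sp.cos(theta)])
thhat = sp.Matrix([sp.cos(theta) * sp.cos(phi), sp.cos(theta) * sp.sin(phi), -sp.sin(theta)])
phhat = sp.Matrix([-sp.sin(phi), sp.cos(phi), 0])

# d rhat / d theta = theta-hat, and d rhat / d phi = sin(theta) phi-hat.
assert (rhat.diff(theta) - thhat).applyfunc(sp.simplify) == sp.zeros(3, 1)
assert (rhat.diff(phi) - sp.sin(theta) * phhat).applyfunc(sp.simplify) == sp.zeros(3, 1)
```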

Substitution back into our expression for the operator we have

\begin{aligned}D &= - 2 i \hbar \left( \frac{\partial {}}{\partial {r}} + \frac{1}{{r}} \right),\end{aligned}

an operator that is exactly twice the operator of problem 5, already shown to be Hermitian. Since the constant numerical scaling of a Hermitian operator leaves it Hermitian, this shows that $D$ is Hermitian as expected.
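The equivalence of the Cartesian and spherical forms of $D$ can also be spot checked with sympy on a sample wavefunction (a Gaussian, my own choice of test function):

```python
import sympy as sp

x, y, z, hbar = sp.symbols('x y z hbar', real=True)
r = sp.sqrt(x**2 + y**2 + z**2)
psi = sp.exp(-(x**2 + y**2 + z**2))   # sample test function
rhat = [x / r, y / r, z / r]
coords = (x, y, z)

# D psi = (p . rhat + rhat . p) psi in Cartesian form.
p_dot_rhat = -sp.I * hbar * sum(sp.diff(rh * psi, v) for rh, v in zip(rhat, coords))
rhat_dot_p = -sp.I * hbar * sum(rh * sp.diff(psi, v) for rh, v in zip(rhat, coords))
lhs = p_dot_rhat + rhat_dot_p

# The spherical polar form -2 i hbar (d/dr + 1/r), using d psi/dr = rhat . grad psi.
dpsi_dr = sum(rh * sp.diff(psi, v) for rh, v in zip(rhat, coords))
rhs = -2 * sp.I * hbar * (dpsi_dr + psi / r)

assert sp.simplify(lhs - rhs) == 0
```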

### $\hat{\boldsymbol{\theta}}$ directional momentum operator

Let’s try this for the other unit vector directions too. We also want

\begin{aligned}\left( \boldsymbol{\nabla} \cdot \hat{\boldsymbol{\theta}} + \hat{\boldsymbol{\theta}} \cdot \boldsymbol{\nabla} \right) \Psi&=2 \hat{\boldsymbol{\theta}} \cdot (\boldsymbol{\nabla} \Psi) + \left( \boldsymbol{\nabla} \cdot \hat{\boldsymbol{\theta}} \right) \Psi.\end{aligned}

The work consists of evaluating

\begin{aligned}\boldsymbol{\nabla} \cdot \hat{\boldsymbol{\theta}} &= \hat{\mathbf{r}} \cdot \frac{\partial {\hat{\boldsymbol{\theta}}}}{\partial {r}}+ \frac{1}{{r}} \hat{\boldsymbol{\theta}} \cdot \frac{\partial {\hat{\boldsymbol{\theta}}}}{\partial {\theta}}+ \frac{1}{{r \sin\theta}} \hat{\boldsymbol{\phi}} \cdot \frac{\partial {\hat{\boldsymbol{\theta}}}}{\partial {\phi}}.\end{aligned}

This time we need the ${\partial {\hat{\boldsymbol{\theta}}}}/{\partial {\theta}}$, ${\partial {\hat{\boldsymbol{\theta}}}}/{\partial {\phi}}$ partials, which are

\begin{aligned}\frac{\partial {\hat{\boldsymbol{\theta}}}}{\partial {\theta}} &=\mathbf{e}_1 \mathbf{e}_2 \hat{\boldsymbol{\phi}} I \hat{\boldsymbol{\phi}} \exp( I \hat{\boldsymbol{\phi}} \theta) \\ &=-\mathbf{e}_3 \exp( I \hat{\boldsymbol{\phi}} \theta) \\ &=- \hat{\mathbf{r}}.\end{aligned}

This has no $\hat{\boldsymbol{\theta}}$ component, so does not contribute to $\boldsymbol{\nabla} \cdot \hat{\boldsymbol{\theta}}$. Noting that

\begin{aligned}\frac{\partial {\hat{\boldsymbol{\phi}}}}{\partial {\phi}} &= -\mathbf{e}_1 \exp( I \mathbf{e}_3 \phi ) = \mathbf{e}_2 \mathbf{e}_1 \hat{\boldsymbol{\phi}},\end{aligned}

the $\phi$ partial is

\begin{aligned}\frac{\partial {\hat{\boldsymbol{\theta}}}}{\partial {\phi}} &=\mathbf{e}_1 \mathbf{e}_2 \left( \frac{\partial {\hat{\boldsymbol{\phi}}}}{\partial {\phi}} \exp( I \hat{\boldsymbol{\phi}} \theta )+\hat{\boldsymbol{\phi}} I \sin\theta \frac{\partial {\hat{\boldsymbol{\phi}}}}{\partial {\phi}} \right) \\ &=\hat{\boldsymbol{\phi}} \left( \exp( I \hat{\boldsymbol{\phi}} \theta )+I \sin\theta \mathbf{e}_2 \mathbf{e}_1 \hat{\boldsymbol{\phi}}\right),\end{aligned}

with $\hat{\boldsymbol{\phi}}$ component

\begin{aligned}\hat{\boldsymbol{\phi}} \cdot \frac{\partial {\hat{\boldsymbol{\theta}}}}{\partial {\phi}} &=\left\langle{{\exp( I \hat{\boldsymbol{\phi}} \theta )+I \sin\theta \mathbf{e}_2 \mathbf{e}_1 \hat{\boldsymbol{\phi}} }}\right\rangle \\ &=\cos\theta + \mathbf{e}_3 \cdot \hat{\boldsymbol{\phi}} \sin\theta \\ &=\cos\theta.\end{aligned}

Assembling the results, and labeling this operator $\Theta$ we have

\begin{aligned}\Theta &\equiv \frac{1}{{2}} \left( \mathbf{p} \cdot \hat{\boldsymbol{\theta}} + \hat{\boldsymbol{\theta}} \cdot \mathbf{p} \right) \\ &=-i \hbar \frac{1}{{r}} \left( \frac{\partial {}}{\partial {\theta}} + \frac{1}{{2}} \cot\theta \right).\end{aligned}

It would be reasonable to expect this operator to also be Hermitian, and checking this explicitly by comparing
${\langle {\Phi} \rvert} \Theta {\lvert {\Psi} \rangle}^{*}$ and ${\langle {\Psi} \rvert} \Theta {\lvert {\Phi} \rangle}$, shows that this is in fact the case.

### $\hat{\boldsymbol{\phi}}$ directional momentum operator

Finally, let’s try the $\hat{\boldsymbol{\phi}}$ direction. We want

\begin{aligned}\left( \boldsymbol{\nabla} \cdot \hat{\boldsymbol{\phi}} + \hat{\boldsymbol{\phi}} \cdot \boldsymbol{\nabla} \right) \Psi&=2 \hat{\boldsymbol{\phi}} \cdot (\boldsymbol{\nabla} \Psi) + \left( \boldsymbol{\nabla} \cdot \hat{\boldsymbol{\phi}} \right) \Psi.\end{aligned}

The work consists of evaluating

\begin{aligned}\boldsymbol{\nabla} \cdot \hat{\boldsymbol{\phi}} &= \hat{\mathbf{r}} \cdot \frac{\partial {\hat{\boldsymbol{\phi}}}}{\partial {r}}+ \frac{1}{{r}} \hat{\boldsymbol{\theta}} \cdot \frac{\partial {\hat{\boldsymbol{\phi}}}}{\partial {\theta}}+ \frac{1}{{r \sin\theta}} \hat{\boldsymbol{\phi}} \cdot \frac{\partial {\hat{\boldsymbol{\phi}}}}{\partial {\phi}}.\end{aligned}

This time we need the ${\partial {\hat{\boldsymbol{\phi}}}}/{\partial {\theta}}$, ${\partial {\hat{\boldsymbol{\phi}}}}/{\partial {\phi}} = \mathbf{e}_2 \mathbf{e}_1 \hat{\boldsymbol{\phi}}$ partials. The $\theta$ partial is

\begin{aligned}\frac{\partial {\hat{\boldsymbol{\phi}}}}{\partial {\theta}} &=\frac{\partial {}}{\partial {\theta}} \mathbf{e}_2 \exp( I \mathbf{e}_3 \phi ) \\ &= 0.\end{aligned}

We conclude that $\boldsymbol{\nabla} \cdot \hat{\boldsymbol{\phi}} = 0$, and expect that we have one more Hermitian operator

\begin{aligned}\Phi &\equiv \frac{1}{{2}} \left( \mathbf{p} \cdot \hat{\boldsymbol{\phi}} + \hat{\boldsymbol{\phi}} \cdot \mathbf{p} \right) \\ &=-i \hbar \frac{1}{{r \sin\theta}} \frac{\partial {}}{\partial {\phi}}.\end{aligned}

It is simple to confirm that this is Hermitian since the integration by parts does not involve any of the volume element. In fact, any operator $-i\hbar f(r,\theta) {\partial {}}/{\partial {\phi}}$ would also be Hermitian, including the simplest case $-i\hbar {\partial {}}/{\partial {\phi}}$. Have to dig out my Bohm text again, since I seem to recall that one being used in the spherical harmonics chapter.

### A note on the Hermitian test and Dirac notation.

I’ve been a bit loose with my notation. I’ve stated that my demonstrations of the Hermitian nature have been done by showing

\begin{aligned}{\langle {\phi} \rvert} A {\lvert {\psi} \rangle}^{*} - {\langle {\psi} \rvert} A {\lvert {\phi} \rangle} = 0.\end{aligned}

However, what I’ve actually done is show that

\begin{aligned}\left( \int d^3 \mathbf{x} \phi^{*} (\mathbf{x}) A(\mathbf{x}) \psi(\mathbf{x}) \right)^{*} - \int d^3 \mathbf{x} \psi^{*} (\mathbf{x}) A(\mathbf{x}) \phi(\mathbf{x}) = 0.\end{aligned}

To justify this note that

\begin{aligned}{\langle {\phi} \rvert} A {\lvert {\psi} \rangle}^{*} &=\left( \iint d^3 \mathbf{r} d^3 \mathbf{s} \left\langle{{\phi}} \vert {\mathbf{r}}\right\rangle {\langle {\mathbf{r}} \rvert} A {\lvert {\mathbf{s}} \rangle} \left\langle{\mathbf{s}} \vert {{\psi}}\right\rangle \right)^{*} \\ &=\iint d^3 \mathbf{r} d^3 \mathbf{s} \phi(\mathbf{r}) \delta^3(\mathbf{r} - \mathbf{s}) A^{*}(\mathbf{s}) \psi^{*}(\mathbf{s}) \\ &=\int d^3 \mathbf{r} \phi(\mathbf{r}) A^{*}(\mathbf{r}) \psi^{*}(\mathbf{r}),\end{aligned}

and

\begin{aligned}{\langle {\psi} \rvert} A {\lvert {\phi} \rangle} &=\iint d^3 \mathbf{r} d^3 \mathbf{s} \left\langle{{\psi}} \vert {\mathbf{r}}\right\rangle {\langle {\mathbf{r}} \rvert} A {\lvert {\mathbf{s}} \rangle} \left\langle{\mathbf{s}} \vert {{\phi}}\right\rangle \\ &=\iint d^3 \mathbf{r} d^3 \mathbf{s} \psi^{*}(\mathbf{r}) \delta^3(\mathbf{r} - \mathbf{s}) A(\mathbf{s}) \phi(\mathbf{s}) \\ &=\int d^3 \mathbf{r} \psi^{*}(\mathbf{r}) A(\mathbf{r}) \phi(\mathbf{r}).\end{aligned}

Working backwards one sees that the comparison of the wave function integrals in explicit inner product notation is sufficient to demonstrate the Hermitian property.

## 7. Some commutators.

### 7. Problem.

For $D$ in problem 6, obtain

\begin{itemize}
\item i) $[D, x_i]$
\item ii) $[D, p_i]$
\item iii) $[D, L_i]$, where $L_i = \mathbf{e}_i \cdot (\mathbf{r} \times \mathbf{p})$.
\item iv) Show that $e^{i\alpha D/\hbar} x_i e^{-i\alpha D/\hbar} = e^\alpha x_i$
\end{itemize}

### 7. Expansion of $\left[{D},{x_i}\right]$.

While expressing the operator as $D = -2 i \hbar (1/r) (1 + r \partial_r)$ has less complexity than $D = \mathbf{p} \cdot \hat{\mathbf{r}} + \hat{\mathbf{r}} \cdot \mathbf{p}$, since no operation on $\hat{\mathbf{r}}$ is required, this doesn’t look particularly convenient for use with Cartesian coordinates. Slightly better perhaps is

\begin{aligned}D = -2 i\hbar \frac{1}{{r}}( \mathbf{r} \cdot \boldsymbol{\nabla} + 1)\end{aligned}

\begin{aligned}[D, x_i] \Psi&=D x_i \Psi - x_i D \Psi \\ &=-2 i \hbar \frac{1}{{r}} \left( \mathbf{r} \cdot \boldsymbol{\nabla} + 1 \right) x_i \Psi+2 i \hbar x_i \frac{1}{{r}} \left( \mathbf{r} \cdot \boldsymbol{\nabla} + 1 \right) \Psi \\ &=-2 i \hbar \frac{1}{{r}} \mathbf{r} \cdot \boldsymbol{\nabla} x_i \Psi+2 i \hbar x_i \frac{1}{{r}} \mathbf{r} \cdot \boldsymbol{\nabla} \Psi \\ &=-2 i \hbar \frac{1}{{r}} \mathbf{r} \cdot (\boldsymbol{\nabla} x_i) \Psi-2 i \hbar x_i \frac{1}{{r}} \mathbf{r} \cdot \boldsymbol{\nabla} \Psi+2 i \hbar x_i \frac{1}{{r}} \mathbf{r} \cdot \boldsymbol{\nabla} \Psi \\ &=-2 i \hbar \frac{1}{{r}} \mathbf{r} \cdot \mathbf{e}_i \Psi.\end{aligned}

So this first commutator is:

\begin{aligned}[D, x_i] = -2 i \hbar \frac{x_i}{r}.\end{aligned}
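A quick symbolic check of this commutator on a test function, using the $-2 i \hbar (1/r)(\mathbf{r} \cdot \boldsymbol{\nabla} + 1)$ form of $D$ (test function and helper names are my own):

```python
import sympy as sp

x, y, z, hbar = sp.symbols('x y z hbar', real=True)
r = sp.sqrt(x**2 + y**2 + z**2)
psi = sp.exp(-(x**2 + y**2 + z**2))   # sample test function

def D(f):
    # D = -2 i hbar (1/r) (r . grad + 1)
    r_dot_grad = x * sp.diff(f, x) + y * sp.diff(f, y) + z * sp.diff(f, z)
    return -2 * sp.I * hbar / r * (r_dot_grad + f)

# [D, x] psi = -2 i hbar (x/r) psi
comm = D(x * psi) - x * D(psi)
assert sp.simplify(comm + 2 * sp.I * hbar * (x / r) * psi) == 0
```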

### 7. Alternate expansion of $\left[{D},{x_i}\right]$.

Let’s try this instead completely in coordinate notation to verify. I’ll use implicit summation for repeated indexes, and write $\partial_k = \partial/\partial x_k$. A few intermediate results will be required

\begin{aligned}\partial_k \frac{1}{{r}} &= \partial_k (x_m x_m)^{-1/2} \\ &= -\frac{1}{{2}} 2 x_k (x_m x_m)^{-3/2} \\ \end{aligned}

Or

\begin{aligned}\partial_k \frac{1}{{r}} &= - \frac{x_k}{r^3}\end{aligned} \hspace{\stretch{1}}(3.9)

\begin{aligned}\partial_k \frac{x_i}{r}&=\frac{\delta_{ik}}{r} - \frac{ x_i x_k }{r^3}\end{aligned} \hspace{\stretch{1}}(3.10)

\begin{aligned}\partial_k \frac{x_k}{r}&=\frac{3}{r} - \frac{ x_k x_k }{r^3} = \frac{2}{r}\end{aligned} \hspace{\stretch{1}}(3.11)

The action of the momentum operators on the coordinates is

\begin{aligned}p_k x_i \Psi &=-i \hbar \partial_k x_i \Psi \\ &=-i \hbar \left( \delta_{ik} + x_i \partial_k \right) \Psi \\ &=\left( -i \hbar \delta_{ik} + x_i p_k \right) \Psi\end{aligned}

\begin{aligned}p_k x_k \Psi &=-i \hbar \partial_k x_k \Psi \\ &=-i \hbar \left( 3 + x_k \partial_k \right) \Psi\end{aligned}

Or

\begin{aligned}p_k x_i &= -i \hbar \delta_{ik} + x_i p_k \\ p_k x_k &= - 3 i \hbar + x_k p_k \end{aligned} \hspace{\stretch{1}}(3.12)

And finally

\begin{aligned}p_k \frac{1}{{r}} \Psi&=(p_k \frac{1}{{r}}) \Psi+ \frac{1}{{r}} p_k \Psi \\ &=-i \hbar \left( -\frac{x_k}{r^3}\right) \Psi+ \frac{1}{{r}} p_k \Psi \\ \end{aligned}

So

\begin{aligned}p_k \frac{1}{{r}} &= i \hbar \frac{x_k}{r^3} + \frac{1}{{r}}p_k\end{aligned} \hspace{\stretch{1}}(3.14)

We can use these to rewrite $D$

\begin{aligned}D &= p_k \frac{x_k}{r} + \frac{x_k}{r} p_k \\ &= p_k x_k \frac{1}{{r}} + \frac{x_k}{r} p_k \\ &= \left( - 3 i \hbar + x_k p_k \right)\frac{1}{{r}} + \frac{x_k}{r} p_k \\ &= - \frac{3 i \hbar}{r} + x_k \left( i \hbar \frac{x_k}{r^3} + \frac{1}{{r}}p_k \right) + \frac{x_k}{r} p_k \\ \end{aligned}

\begin{aligned}D &= \frac{2}{r} ( -i \hbar + x_k p_k )\end{aligned} \hspace{\stretch{1}}(3.15)

This leaves us in the position to compute the commutator

\begin{aligned}\left[{D},{x_i}\right]&= \frac{2}{r} ( -i \hbar + x_k p_k ) x_i- \frac{2 x_i}{r} ( -i \hbar + x_k p_k ) \\ &= \frac{2}{r} x_k ( -i \hbar \delta_{ik} + x_i p_k )- \frac{2 x_i}{r} x_k p_k \\ &= -\frac{2 i \hbar x_i}{r} \end{aligned}

So, unless I’m doing something fundamentally wrong in the same way in both methods, this appears to be the desired result. I question my answer since utilizing it for the later computation of $e^{i\alpha D/\hbar} x_i e^{-i\alpha D/\hbar}$ did not yield the expected answer.

### 7. $[D, p_i]$

\begin{aligned}\left[{D},{p_i}\right] &=\frac{2}{r} ( -i \hbar + x_k p_k ) p_i - p_i \frac{2}{r} ( -i \hbar + x_k p_k ) \\ &=\frac{2}{r} \left( -i \hbar p_i + x_k p_k p_i \right) - 2 \left( i \hbar \frac{x_i}{r^3} + \frac{1}{{r}} p_i \right) ( -i \hbar + x_k p_k ) \\ &=\frac{2}{r} \left( -i \hbar p_i + x_k p_k p_i \right) - \frac{i \hbar x_i}{r^2} D - \frac{2}{r} \left( -i \hbar p_i + p_i x_k p_k \right) \\ &=\frac{2}{r} \left( -i \hbar p_i + x_k p_k p_i \right) - \frac{i \hbar x_i}{r^2} D - \frac{2}{r} \left( -2 i \hbar p_i + x_k p_k p_i \right) \\ &=\frac{2 i \hbar}{r} p_i - \frac{i \hbar x_i}{r^2} D, \qquad\square\end{aligned}

where 3.14 was used for $p_i (1/r)$, and 3.12 for $p_i x_k p_k = -i \hbar p_i + x_k p_k p_i$ (the momentum components commute among themselves).

If there is some significance to this expansion, other than to get a feel for operator manipulation, it escapes me.
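Reductions like this are error prone by hand, so a symbolic spot check is worthwhile. A sympy sketch verifying the $i = 1$ component of $\left[{D},{p_i}\right] = \frac{2 i \hbar}{r} p_i - \frac{i \hbar x_i}{r^2} D$ on a test function (test function and helper names my own):

```python
import sympy as sp

x, y, z, hbar = sp.symbols('x y z hbar', real=True)
r = sp.sqrt(x**2 + y**2 + z**2)
psi = sp.exp(-(x**2 + y**2 + z**2))   # sample test function

def D(f):
    # D = -2 i hbar (1/r) (r . grad + 1)
    r_dot_grad = x * sp.diff(f, x) + y * sp.diff(f, y) + z * sp.diff(f, z)
    return -2 * sp.I * hbar / r * (r_dot_grad + f)

def px(f):
    return -sp.I * hbar * sp.diff(f, x)

comm = D(px(psi)) - px(D(psi))
claim = (2 * sp.I * hbar / r) * px(psi) - sp.I * hbar * (x / r**2) * D(psi)
assert sp.simplify(comm - claim) == 0
```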

### 7. $[D, L_i]$

To expand $[D, L_i]$, it will be sufficient to consider any specific index $i \in \{1,2,3\}$ and then utilize cyclic permutation of the indexes in the result to generalize. Let’s pick $i=1$, for which we have

\begin{aligned}L_1 = x_2 p_3 - x_3 p_2 \end{aligned}

It appears we will want to know

\begin{aligned}p_m D &= 2 \left( i \hbar \frac{x_m}{r^3} + \frac{1}{{r}} p_m \right) ( -i \hbar + x_k p_k ) \\ &= \frac{i \hbar x_m}{r^2} D + \frac{2}{r} \left( -i \hbar p_m + p_m x_k p_k \right) \\ &= \frac{i \hbar x_m}{r^2} D + \frac{2}{r} \left( -2 i \hbar p_m + x_k p_k p_m \right) \\ &= \frac{i \hbar x_m}{r^2} D + D p_m - \frac{2 i \hbar}{r} p_m,\end{aligned}

where 3.14 was used for $p_m (1/r)$ and 3.12 for $p_m x_k p_k = -i \hbar p_m + x_k p_k p_m$. Rearranged, this is

\begin{aligned}D p_m &= p_m D - \frac{i \hbar x_m}{r^2} D + \frac{2 i \hbar}{r} p_m.\end{aligned}

We also want

\begin{aligned}D x_m &= \frac{2}{r} ( -i \hbar + x_k p_k ) x_m \\ &= \frac{2}{r} \left( -i \hbar x_m + x_k ( -i \hbar \delta_{km} + x_m p_k ) \right) \\ &= \frac{2}{r} \left( -2 i \hbar x_m + x_m x_k p_k \right) \\ &= x_m D - \frac{2 i \hbar x_m}{r}.\end{aligned}

Assembling these we have

\begin{aligned}D x_2 p_3 - x_2 p_3 D &= \left( x_2 D - \frac{2 i \hbar x_2}{r} \right) p_3 - x_2 p_3 D \\ &= x_2 \left( p_3 D - \frac{i \hbar x_3}{r^2} D + \frac{2 i \hbar}{r} p_3 \right) - \frac{2 i \hbar x_2}{r} p_3 - x_2 p_3 D \\ &= -\frac{i \hbar x_2 x_3}{r^2} D.\end{aligned}

The $x_3 p_2$ pair of terms differs only by an interchange of the indexes two and three, producing an identical contribution of opposite sign, leaving us zero

\begin{aligned}\left[{D},{L_1}\right] = D x_2 p_3 - x_2 p_3 D - \left( D x_3 p_2 - x_3 p_2 D \right) = 0.\end{aligned}

Utilizing the commutator relationships derived earlier this way is much less error prone than a brute force expansion of all the terms. Since $D$ is built entirely from dot products, a rotational invariant, a zero commutator with each of the angular momentum components is also what we should expect.

### 7. $e^{i\alpha D/\hbar} x_i e^{-i\alpha D/\hbar}$

We will need to evaluate $D^k x_i$. We have the first power from our commutator relation

\begin{aligned}D x_i &= x_i \left( D - \frac{ 2 i \hbar }{r} \right)\end{aligned}

A successive application of this operator therefore yields

\begin{aligned}D^2 x_i &= D x_i \left( D - \frac{ 2 i \hbar }{r} \right) \\ &= x_i \left( D - \frac{ 2 i \hbar }{r} \right)^2 \\ \end{aligned}

So we have

\begin{aligned}D^k x_i &= x_i \left( D - \frac{ 2 i \hbar }{r} \right)^k \\ \end{aligned}

This now preps us to expand the first product in the desired exponential sandwich

\begin{aligned}e^{i\alpha D/\hbar} x_i&=x_i + \sum_{k=1}^\infty \frac{1}{{k!}} \left( \frac{i \alpha}{\hbar} \right)^k D^k x_i \\ &=x_i + \sum_{k=1}^\infty \frac{1}{{k!}} \left( \frac{i \alpha}{\hbar} \right)^k x_i \left( D - \frac{ 2 i \hbar }{r} \right)^k \\ &= x_i e^{ \frac{i \alpha }{\hbar} \left( D - \frac{ 2 i \hbar }{r} \right) } \\ &= x_i e^{ 2 \alpha /r } e^{ i \alpha D /\hbar }.\end{aligned}

Note that the last step treats $D$ and $2 i \hbar/r$ as if they commuted, which is not actually the case.

The exponential sandwich then produces

\begin{aligned}e^{i\alpha D/\hbar} x_i e^{-i\alpha D/\hbar} &= e^{2 \alpha/r } x_i \end{aligned}

Note that this isn’t the value we are supposed to get. Either my value for $D x_i$ is off by a factor of $2/r$, the problem in the text contains a typo, or the unjustified splitting of the exponential above is to blame.

## 8. Reduction of some commutators using the fundamental commutator relation.

Using the fundamental commutation relation

\begin{aligned}\left[{p},{x}\right] = -i \hbar,\end{aligned}

which we can also write as

\begin{aligned}p x = x p -i \hbar,\end{aligned}

expand $\left[{x},{p^2}\right]$, $\left[{x^2},{p}\right]$, and $\left[{x^2},{p^2}\right]$.

The first is

\begin{aligned}\left[{x},{p^2}\right] &= x p^2 - p^2 x \\ &= x p^2 - p (p x) \\ &= x p^2 - p (x p -i \hbar) \\ &= x p^2 - (x p -i \hbar) p + i \hbar p \\ &= 2 i \hbar p \\ \end{aligned}

The second is

\begin{aligned}\left[{x^2},{p}\right] &= x^2 p - p x^2 \\ &= x^2 p - (x p - i\hbar) x \\ &= x^2 p - x (x p - i\hbar) + i \hbar x \\ &= 2 i \hbar x \\ \end{aligned}

Note that it is helpful for the last reduction of this problem to observe that we can write this as

\begin{aligned}p x^2 &= x^2 p - 2 i \hbar x \\ \end{aligned}

Finally for this last we have

\begin{aligned}\left[{x^2},{p^2}\right] &= x^2 p^2 - p^2 x^2 \\ &= x^2 p^2 - p (x^2 p - 2 i \hbar x) \\ &= x^2 p^2 - (x^2 p - 2 i \hbar x) p + 2 i \hbar (x p - i \hbar) \\ &= 4 i \hbar x p - 2 (i \hbar)^2 \\ \end{aligned}

That’s about as reduced as this can be made, but it is not very tidy looking. From this point we can simplify it a bit by factoring

\begin{aligned}\left[{x^2},{p^2}\right] &= 4 i \hbar x p - 2 (i \hbar)^2 \\ &= 2 i \hbar ( 2 x p - i \hbar) \\ &= 2 i \hbar ( x p + p x ) \\ &= 2 i \hbar \left\{{x},{p}\right\} \end{aligned}
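All three of these expansions are mechanical enough to verify with sympy acting on an arbitrary wavefunction. A sketch:

```python
import sympy as sp

x, hbar = sp.symbols('x hbar', positive=True)
psi = sp.Function('psi')(x)

def p(f):
    return -sp.I * hbar * sp.diff(f, x)

# [x, p^2] = 2 i hbar p
c1 = x * p(p(psi)) - p(p(x * psi))
assert sp.simplify(c1 - 2 * sp.I * hbar * p(psi)) == 0

# [x^2, p] = 2 i hbar x
c2 = x**2 * p(psi) - p(x**2 * psi)
assert sp.simplify(c2 - 2 * sp.I * hbar * x * psi) == 0

# [x^2, p^2] = 2 i hbar {x, p}
c3 = x**2 * p(p(psi)) - p(p(x**2 * psi))
anti = x * p(psi) + p(x * psi)
assert sp.simplify(c3 - 2 * sp.I * hbar * anti) == 0
```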

## 9. Finite displacement operator.

### 9. Part I.

For

\begin{aligned}F(d) = e^{-i p d/\hbar},\end{aligned}

the first part of this problem is to show that

\begin{aligned}\left[{x},{F(d)}\right] = x F(d) - F(d) x = d F(d)\end{aligned}

We need to evaluate

\begin{aligned}e^{-i p d/\hbar} x = \sum_{k=0}^\infty \frac{1}{{k!}} \left( \frac{-i p d}{\hbar} \right)^k x.\end{aligned}

To do so requires a reduction of $p^k x$. For $k=2$ we have

\begin{aligned}p^2 x &= p ( x p - i\hbar ) \\ &= ( x p - i\hbar ) p - i \hbar p \\ &= x p^2 - 2 i\hbar p.\end{aligned}

For the cube we get $p^3 x = x p^3 - 3 i\hbar p^2$, supplying confirmation of an induction hypothesis $p^k x = x p^k - k i\hbar p^{k-1}$, which can be verified

\begin{aligned}p^{k+1} x &= p ( x p^k - k i \hbar p^{k-1}) \\ &= (x p - i\hbar) p^k - k i \hbar p^k \\ &= x p^{k+1} - (k+1) i \hbar p^k \qquad\square\end{aligned}

For our exponential we then have

\begin{aligned}e^{-i p d/\hbar} x &= x + \sum_{k=1}^\infty \frac{1}{{k!}} \left( \frac{-i d}{\hbar} \right)^k (x p^k - k i\hbar p^{k-1}) \\ &= x e^{-i p d /\hbar }+ \sum_{k=1}^\infty \frac{1}{{(k-1)!}} \left( \frac{-i p d}{\hbar} \right)^{k-1} (-i d/\hbar)(- i\hbar) \\ &= ( x - d ) e^{-i p d /\hbar }.\end{aligned}

Put back into our commutator we have

\begin{aligned}\left[{x},{e^{-i p d/\hbar}}\right] = d e^{-ip d/\hbar},\end{aligned}

completing the proof.
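The same identity can also be seen directly from the translation action of $F(d)$, which is derived below in Part II: with $(F(d)\psi)(x) = \psi(x - d)$ the commutator is immediate. A sympy sketch:

```python
import sympy as sp

x, d = sp.symbols('x d', real=True)
psi = sp.Function('psi')

def F(f):
    # Translation action of F(d) = exp(-i p d / hbar): (F(d) f)(x) = f(x - d).
    return f.subs(x, x - d)

lhs = x * F(psi(x)) - F(x * psi(x))   # [x, F(d)] psi
rhs = d * F(psi(x))

assert sp.simplify(lhs - rhs) == 0
```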

### 9. Part II.

For state ${\lvert {\alpha} \rangle}$ with ${\lvert {\alpha_d} \rangle} = F(d) {\lvert {\alpha} \rangle}$, show that the expectation values satisfy

\begin{aligned}\left\langle{{X}}\right\rangle_d = \left\langle{{X}}\right\rangle + d\end{aligned}

\begin{aligned}\left\langle{{X}}\right\rangle_d &={\langle {\alpha_d} \rvert} X {\lvert {\alpha_d} \rangle} \\ &=\iint dx' dx'' \left\langle{{\alpha_d}} \vert {{x'}}\right\rangle {\langle {x'} \rvert} X {\lvert {x''} \rangle} \left\langle{{x''}} \vert {{\alpha_d}}\right\rangle \\ &=\iint dx' dx'' \alpha_d^{*}(x') \delta(x' -x'') x' \alpha_d(x'') \\ &=\int dx' \alpha_d^{*}(x') x' \alpha_d(x') \\ \end{aligned}

But

\begin{aligned}\alpha_d(x') &= \exp\left( -\frac{i d }{\hbar} (-i\hbar) \frac{\partial}{\partial x'} \right) \alpha(x') \\ &= e^{- d \frac{\partial}{\partial x'} } \alpha(x') \\ &= \alpha(x' - d),\end{aligned}

so our position expectation is

\begin{aligned}\left\langle{{X}}\right\rangle_d &=\int dx' \alpha^{*}(x' -d) x' \alpha(x'- d).\end{aligned}

A change of variables $x = x' -d$ gives us

\begin{aligned}\left\langle{{X}}\right\rangle_d &=\int dx \alpha^{*}(x) (x + d) \alpha(x) \\ &=\left\langle{{X}}\right\rangle + d \int dx \alpha^{*}(x) \alpha(x) \\ &=\left\langle{{X}}\right\rangle + d \qquad\square\end{aligned}
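A sympy check of this expectation shift, using a unit Gaussian (my choice, with $\left\langle{{X}}\right\rangle = 0$) as the state:

```python
import sympy as sp

x, d = sp.symbols('x d', real=True)

# Unit Gaussian state with <X> = 0.
alpha = sp.pi**sp.Rational(-1, 4) * sp.exp(-x**2 / 2)
alpha_d = alpha.subs(x, x - d)        # F(d) |alpha> in the position representation

mean_d = sp.integrate(alpha_d * x * alpha_d, (x, -sp.oo, sp.oo))
assert sp.simplify(mean_d - d) == 0
```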

## 10. Hamiltonian position commutator and double commutator

For

\begin{aligned}H = \frac{1}{{2m}} p^2 + V(x)\end{aligned}

calculate $\left[{H},{x}\right]$, and $\left[{\left[{H},{x}\right]},{x}\right]$.

These are

\begin{aligned}\left[{H},{x}\right]&=\frac{1}{{2m}} p^2 x + V(x) x -\frac{1}{{2m}} x p^2 - x V(x) \\ &=\frac{1}{{2m}} p ( x p - i \hbar) -\frac{1}{{2m}} x p^2 \\ &=\frac{1}{{2m}} \left( ( x p - i \hbar) p -i \hbar p \right) -\frac{1}{{2m}} x p^2 \\ &=-\frac{i\hbar p}{m} \\ \end{aligned}

and

\begin{aligned}\left[{\left[{H},{x}\right]},{x}\right]&=-\frac{i\hbar }{m} \left[{p},{x}\right] \\ &=\frac{(-i\hbar)^2 }{m} \\ &=-\frac{\hbar^2 }{m} \\ \end{aligned}
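Both commutators can be verified symbolically for an arbitrary potential and wavefunction. A sympy sketch:

```python
import sympy as sp

x, hbar, m = sp.symbols('x hbar m', positive=True)
psi = sp.Function('psi')(x)
V = sp.Function('V')(x)

def H(f):
    # H = p^2/2m + V(x)
    return -hbar**2 / (2 * m) * sp.diff(f, x, 2) + V * f

def comm_H_x(f):
    # [H, x] applied to f
    return H(x * f) - x * H(f)

# [H, x] = -i hbar p / m
p_psi = -sp.I * hbar * sp.diff(psi, x)
assert sp.simplify(comm_H_x(psi) - (-sp.I * hbar / m) * p_psi) == 0

# [[H, x], x] = -hbar^2 / m
double = comm_H_x(x * psi) - x * comm_H_x(psi)
assert sp.simplify(double + hbar**2 / m * psi) == 0
```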

We also have to show that

\begin{aligned}\sum_k (E_k -E_n) {\left\lvert{ {\langle {k} \rvert} x {\lvert {n} \rangle} }\right\rvert}^2 = \frac{\hbar^2}{2m}\end{aligned}

Expanding the absolute value in terms of conjugates we have

\begin{aligned}\sum_k (E_k -E_n) {\left\lvert{ {\langle {k} \rvert} x {\lvert {n} \rangle} }\right\rvert}^2 &= \sum_k (E_k -E_n) {\langle {k} \rvert} x {\lvert {n} \rangle} {\langle {n} \rvert} x {\lvert {k} \rangle} \\ &= \sum_k {\langle {k} \rvert} x {\lvert {n} \rangle} {\langle {n} \rvert} x E_k {\lvert {k} \rangle} -{\langle {k} \rvert} x E_n {\lvert {n} \rangle} {\langle {n} \rvert} x {\lvert {k} \rangle} \\ &= \sum_k {\langle {n} \rvert} x H {\lvert {k} \rangle} {\langle {k} \rvert} x {\lvert {n} \rangle} - {\langle {n} \rvert} x {\lvert {k} \rangle} {\langle {k} \rvert} x H {\lvert {n} \rangle} \\ &= {\langle {n} \rvert} x H x {\lvert {n} \rangle} - {\langle {n} \rvert} x x H {\lvert {n} \rangle} \\ &= {\langle {n} \rvert} x \left[{H},{x}\right] {\lvert {n} \rangle} \\ &= -\frac{i \hbar}{m} {\langle {n} \rvert} x p {\lvert {n} \rangle} \\ \end{aligned}

It is not obvious where to go from here. Taking the clue from the problem that the result involves the double commutator, we have

\begin{aligned}- \frac{\hbar^2}{m}&={\langle {n} \rvert} \left[{\left[{H},{x}\right]},{x}\right] {\lvert {n} \rangle} \\ &={\langle {n} \rvert} H x^2 - 2 x H x + x^2 H {\lvert {n} \rangle} \\ &=2 E_n {\langle {n} \rvert} x^2 {\lvert { n} \rangle} - 2 {\langle {n} \rvert} x H x {\lvert {n} \rangle} \\ &=2 E_n {\langle {n} \rvert} x^2 {\lvert { n} \rangle} - 2 {\langle {n} \rvert} ( -\left[{H},{x}\right] + H x) x {\lvert {n} \rangle} \\ &=2 {\langle {n} \rvert} \left[{H},{x}\right] x {\lvert {n} \rangle} \\ &=-\frac{2 i \hbar}{m} {\langle {n} \rvert} p x {\lvert {n} \rangle} \\ &=-\frac{2 i \hbar}{m} {\langle {n} \rvert} x p - i \hbar {\lvert {n} \rangle} \\ &=-\frac{2 i \hbar}{m} {\langle {n} \rvert} x p {\lvert {n} \rangle} +\frac{2 (i \hbar)^2}{m} \end{aligned}

So, somewhat flukily by working backwards, with a last rearrangement, we now have

\begin{aligned}-\frac{i \hbar}{m} {\langle {n} \rvert} x p {\lvert {n} \rangle} &= \frac{1}{{2}} \left( -\frac{\hbar^2}{m} - \frac{2 (i \hbar)^2}{m} \right) \\ &= \frac{\hbar^2}{2 m}\end{aligned}

Substitution above gives the desired result. This is extremely ugly, and doesn’t follow any sort of logical progression. Is there a good way to sequence this proof?
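As a numerical sanity check, the sum rule can be evaluated in a truncated harmonic oscillator basis, where only the $k = n \pm 1$ matrix elements of $x$ contribute. A numpy sketch with $\hbar = m = \omega = 1$, for which $\hbar^2/2m = 1/2$:

```python
import numpy as np

# Truncated harmonic oscillator number basis.
N = 40
n_levels = np.arange(N)
a = np.diag(np.sqrt(n_levels[1:].astype(float)), 1)   # lowering operator
x = (a + a.T) / np.sqrt(2.0)                          # X = (a + a^dagger)/sqrt(2)
E = n_levels + 0.5                                    # E_n = n + 1/2

n = 3
total = sum((E[k] - E[n]) * abs(x[k, n])**2 for k in range(N))
assert abs(total - 0.5) < 1e-12
```

For any $n$ well below the truncation the sum is exact, since $x$ only couples adjacent levels.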

## 11. Another double commutator.

### Attempt 1. Incomplete.

\begin{aligned}H = \frac{\mathbf{p}^2}{2m} + V(\mathbf{r})\end{aligned}

use $\left[{\left[{H},{e^{i \mathbf{k} \cdot \mathbf{r}}}\right]},{e^{-i \mathbf{k} \cdot \mathbf{r}}}\right]$ to evaluate

\begin{aligned}\sum_n (E_n - E_s) {\left\lvert{{\langle {n} \rvert} e^{i\mathbf{k} \cdot \mathbf{r}} {\lvert {s} \rangle}}\right\rvert}^2\end{aligned}

First evaluate the commutators. The first is

\begin{aligned}\left[{H},{ e^{i \mathbf{k} \cdot \mathbf{r}}}\right]&= \frac{1}{{2m}} \left[{\mathbf{p}^2},{e^{i\mathbf{k} \cdot \mathbf{r}}}\right]\end{aligned}

The Laplacian applied to this exponential is

\begin{aligned}\boldsymbol{\nabla}^2 e^{i \mathbf{k} \cdot \mathbf{r} } \Psi&=\partial_m \partial_m e^{i k_n x_n } \Psi \\ &=\partial_m (i k_m e^{i \mathbf{k}\cdot \mathbf{r}} \Psi + e^{i \mathbf{k} \cdot \mathbf{r} } \partial_m \Psi ) \\ &=- \mathbf{k}^2 e^{i \mathbf{k}\cdot \mathbf{r}} \Psi + i e^{i \mathbf{k} \cdot \mathbf{r} } \mathbf{k} \cdot \boldsymbol{\nabla} \Psi+ i e^{i \mathbf{k} \cdot \mathbf{r}} \mathbf{k} \cdot \boldsymbol{\nabla} \Psi+ e^{i \mathbf{k} \cdot \mathbf{r}} \boldsymbol{\nabla}^2 \Psi\end{aligned}

Factoring out the exponentials this is

\begin{aligned}\boldsymbol{\nabla}^2 e^{i \mathbf{k} \cdot \mathbf{r} } &=e^{i \mathbf{k}\cdot \mathbf{r}} \left(- \mathbf{k}^2 + 2 i \mathbf{k} \cdot \boldsymbol{\nabla} + \boldsymbol{\nabla}^2 \right),\end{aligned}

and in terms of $\mathbf{p}$, we have

\begin{aligned}\mathbf{p}^2 e^{i \mathbf{k}\cdot \mathbf{r}} &= e^{i \mathbf{k}\cdot \mathbf{r}} \left((\hbar\mathbf{k})^2 + 2 (\hbar \mathbf{k}) \cdot \mathbf{p} + \mathbf{p}^2 \right)=e^{i \mathbf{k}\cdot \mathbf{r}} (\hbar \mathbf{k} + \mathbf{p})^2\end{aligned}

So, finally, our first commutator is

\begin{aligned}\left[{H},{ e^{i \mathbf{k} \cdot \mathbf{r}}}\right]&= \frac{1}{{2m}}e^{i \mathbf{k}\cdot \mathbf{r}} \left((\hbar\mathbf{k})^2 + 2 (\hbar \mathbf{k}) \cdot \mathbf{p} \right)\end{aligned} \hspace{\stretch{1}}(3.16)

The double commutator is then

\begin{aligned}\left[{\left[{H},{e^{i \mathbf{k} \cdot \mathbf{r}}}\right]},{e^{-i \mathbf{k} \cdot \mathbf{r}}}\right]&=e^{i\mathbf{k} \cdot \mathbf{r}} \frac{\hbar \mathbf{k}}{m} \cdot \left( \mathbf{p} e^{-i \mathbf{k} \cdot \mathbf{r}} - e^{-i \mathbf{k} \cdot \mathbf{r}} \mathbf{p} \right)\end{aligned}

To simplify this we want

\begin{aligned}\mathbf{k} \cdot \boldsymbol{\nabla} e^{-i \mathbf{k} \cdot \mathbf{r}} \Psi &=k_n \partial_n e^{-i k_m x_m } \Psi \\ &=e^{-i \mathbf{k} \cdot \mathbf{r} }\left(k_n (-i k_n) \Psi + k_n \partial_n \Psi \right) \\ &=e^{-i \mathbf{k} \cdot \mathbf{r} } \left( -i \mathbf{k}^2 + \mathbf{k} \cdot \boldsymbol{\nabla} \right) \Psi\end{aligned}

The double commutator is then left with just

\begin{aligned}\left[{\left[{H},{e^{i \mathbf{k} \cdot \mathbf{r}}}\right]},{e^{-i \mathbf{k} \cdot \mathbf{r}}}\right]&=- \frac{1}{{m}} (\hbar \mathbf{k})^2 \end{aligned} \hspace{\stretch{1}}(3.17)

Now, returning to the energy expression

\begin{aligned}\sum_n (E_n - E_s) {\left\lvert{{\langle {n} \rvert} e^{i\mathbf{k} \cdot \mathbf{r}} {\lvert {s} \rangle}}\right\rvert}^2&=\sum_n (E_n - E_s) {\langle {s} \rvert} e^{-i\mathbf{k} \cdot \mathbf{r}} {\lvert {n} \rangle} {\langle {n} \rvert} e^{i\mathbf{k} \cdot \mathbf{r}} {\lvert {s} \rangle} \\ &=\sum_n {\langle {s} \rvert} e^{-i\mathbf{k} \cdot \mathbf{r}} H {\lvert {n} \rangle} {\langle {n} \rvert} e^{i\mathbf{k} \cdot \mathbf{r}} {\lvert {s} \rangle} -{\langle {s} \rvert} e^{-i\mathbf{k} \cdot \mathbf{r}} {\lvert {n} \rangle} {\langle {n} \rvert} e^{i\mathbf{k} \cdot \mathbf{r}} H {\lvert {s} \rangle} \\ &={\langle {s} \rvert} e^{-i\mathbf{k} \cdot \mathbf{r}} H e^{i\mathbf{k} \cdot \mathbf{r}} {\lvert {s} \rangle} -{\langle {s} \rvert} e^{-i\mathbf{k} \cdot \mathbf{r}} e^{i\mathbf{k} \cdot \mathbf{r}} H {\lvert {s} \rangle} \\ &={\langle {s} \rvert} e^{-i\mathbf{k} \cdot \mathbf{r}} \left[{H},{e^{i\mathbf{k} \cdot \mathbf{r}}}\right] {\lvert {s} \rangle} \\ &=\frac{1}{{2m}} {\langle {s} \rvert} e^{-i\mathbf{k} \cdot \mathbf{r}} e^{i \mathbf{k}\cdot \mathbf{r}} \left((\hbar\mathbf{k})^2 + 2 (\hbar \mathbf{k}) \cdot \mathbf{p} \right){\lvert {s} \rangle} \\ &=\frac{1}{{2m}} {\langle {s} \rvert} (\hbar\mathbf{k})^2 + 2 (\hbar \mathbf{k}) \cdot \mathbf{p} {\lvert {s} \rangle} \\ &=\frac{(\hbar\mathbf{k})^2}{2m} + \frac{1}{{m}} {\langle {s} \rvert} (\hbar \mathbf{k}) \cdot \mathbf{p} {\lvert {s} \rangle} \\ \end{aligned}

I can’t figure out what to do with the $\hbar \mathbf{k} \cdot \mathbf{p}$ expectation, and keep going around in circles.

I figure there is some trick related to the double commutator, so expanding the expectation of that seems appropriate

\begin{aligned}-\frac{1}{{m}} (\hbar \mathbf{k})^2 &={\langle {s} \rvert} \left[{\left[{H},{e^{i \mathbf{k} \cdot \mathbf{r}}}\right]},{e^{-i \mathbf{k} \cdot \mathbf{r}}}\right] {\lvert {s} \rangle} \\ &={\langle {s} \rvert} \left[{H},{e^{i \mathbf{k} \cdot \mathbf{r}}}\right] e^{-i \mathbf{k} \cdot \mathbf{r}}-e^{-i \mathbf{k} \cdot \mathbf{r}}\left[{H},{e^{i \mathbf{k} \cdot \mathbf{r}}}\right] {\lvert {s} \rangle} \\ &=\frac{1}{{2m }} {\langle {s} \rvert} e^{ i \mathbf{k} \cdot \mathbf{r}} ( (\hbar \mathbf{k})^2 + 2 \hbar \mathbf{k} \cdot \mathbf{p}) e^{-i \mathbf{k} \cdot \mathbf{r}}-e^{-i \mathbf{k} \cdot \mathbf{r}} e^{ i \mathbf{k} \cdot \mathbf{r}} ( (\hbar \mathbf{k})^2 + 2 \hbar \mathbf{k} \cdot \mathbf{p}) {\lvert {s} \rangle} \\ &=\frac{1}{{m}} {\langle {s} \rvert} e^{ i \mathbf{k} \cdot \mathbf{r}} (\hbar \mathbf{k} \cdot \mathbf{p}) e^{-i \mathbf{k} \cdot \mathbf{r}}-\hbar \mathbf{k} \cdot \mathbf{p} {\lvert {s} \rangle} \\ \end{aligned}

### Attempt 2.

I was going in circles above. With the help of betel on physicsforums, I got pointed in the right direction. Here’s a rework of this problem from scratch, also benefiting from hindsight.

Our starting point is the same, with the evaluation of the first commutator

\begin{aligned}\left[{H},{ e^{i \mathbf{k} \cdot \mathbf{r}}}\right]&= \frac{1}{{2m}} \left[{\mathbf{p}^2},{e^{i\mathbf{k} \cdot \mathbf{r}}}\right].\end{aligned} \hspace{\stretch{1}}(3.18)

To continue we need to know how the momentum operator acts on an exponential of this form

\begin{aligned}\mathbf{p} e^{\pm i \mathbf{k} \cdot \mathbf{r}} \Psi&=-i \hbar \mathbf{e}_m \partial_m e^{\pm i k_n x_n } \Psi \\ &=e^{\pm i \mathbf{k} \cdot \mathbf{r}} \left( -i \hbar (\pm i \mathbf{e}_m k_m ) \Psi -i \hbar \mathbf{e}_m \partial_m \Psi\right).\end{aligned}

This gives us the helpful relationship

\begin{aligned}\mathbf{p} e^{\pm i \mathbf{k} \cdot \mathbf{r}} = e^{\pm i \mathbf{k} \cdot \mathbf{r}} (\mathbf{p} \pm \hbar \mathbf{k}).\end{aligned} \hspace{\stretch{1}}(3.19)
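Relation 3.19 can be verified symbolically in one dimension (the three dimensional statement follows component-wise). A small sketch, assuming sympy is available:

```python
import sympy as sp

x, k, hbar = sp.symbols('x k hbar', real=True, positive=True)
psi = sp.Function('psi')(x)

def p(f):
    """Momentum operator p = -i hbar d/dx applied to f."""
    return -sp.I * hbar * sp.diff(f, x)

for sign in (1, -1):
    phase = sp.exp(sign * sp.I * k * x)
    lhs = p(phase * psi)                            # p e^{+- i k x} psi
    rhs = phase * (p(psi) + sign * hbar * k * psi)  # e^{+- i k x} (p +- hbar k) psi
    assert sp.simplify(lhs - rhs) == 0
```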

Squared application of the momentum operator on the positive exponential found in the first commutator 3.18 gives us

\begin{aligned}\mathbf{p}^2 e^{i \mathbf{k} \cdot \mathbf{r}} = e^{i \mathbf{k} \cdot \mathbf{r}} (\hbar \mathbf{k} + \mathbf{p})^2 = e^{i \mathbf{k} \cdot \mathbf{r}} ((\hbar \mathbf{k})^2 + 2 \hbar \mathbf{k} \cdot \mathbf{p} + \mathbf{p}^2),\end{aligned} \hspace{\stretch{1}}(3.20)

with which we can evaluate this first commutator.

\begin{aligned}\left[{H},{ e^{i \mathbf{k} \cdot \mathbf{r}}}\right]&= \frac{1}{{2m}} e^{i \mathbf{k} \cdot \mathbf{r}} ((\hbar \mathbf{k})^2 + 2 \hbar \mathbf{k} \cdot \mathbf{p}).\end{aligned} \hspace{\stretch{1}}(3.21)

For the double commutator we have

\begin{aligned}2m \left[{\left[{H},{e^{i \mathbf{k} \cdot \mathbf{r}}}\right]},{e^{-i \mathbf{k} \cdot \mathbf{r}}}\right]&=e^{i \mathbf{k} \cdot \mathbf{r}} ((\hbar \mathbf{k})^2 + 2 \hbar \mathbf{k} \cdot \mathbf{p}) e^{-i \mathbf{k} \cdot \mathbf{r}}-((\hbar \mathbf{k})^2 + 2 \hbar \mathbf{k} \cdot \mathbf{p}) \\ &=e^{i \mathbf{k} \cdot \mathbf{r}} 2 (\hbar \mathbf{k} \cdot \mathbf{p}) e^{-i \mathbf{k} \cdot \mathbf{r}}-2 \hbar \mathbf{k} \cdot \mathbf{p} \\ &=2 \hbar \mathbf{k} \cdot (\mathbf{p} - \hbar \mathbf{k})-2 \hbar \mathbf{k} \cdot \mathbf{p},\end{aligned}

so for the double commutator we have just a scalar

\begin{aligned}\left[{\left[{H},{e^{i \mathbf{k} \cdot \mathbf{r}}}\right]},{e^{-i \mathbf{k} \cdot \mathbf{r}}}\right]&= -\frac{(\hbar \mathbf{k})^2}{m}.\end{aligned} \hspace{\stretch{1}}(3.22)
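The scalar value 3.22 can also be confirmed symbolically in one dimension, for an arbitrary potential $V(x)$ and test function $\psi(x)$ (again a sympy sketch of my own, not part of the original derivation):

```python
import sympy as sp

x, k, hbar, m = sp.symbols('x k hbar m', real=True, positive=True)
psi = sp.Function('psi')(x)
V = sp.Function('V')(x)

def H(f):
    """H = p^2/(2m) + V(x), applied to a wavefunction of x."""
    return -hbar**2 / (2 * m) * sp.diff(f, x, 2) + V * f

ep = sp.exp(sp.I * k * x)
em = sp.exp(-sp.I * k * x)

def comm(f):
    """[H, e^{i k x}] applied to f."""
    return H(ep * f) - ep * H(f)

# [[H, e^{i k x}], e^{-i k x}] applied to psi; the V(x) terms drop out.
double = comm(em * psi) - em * comm(psi)

# Expect the double commutator to act as multiplication by -(hbar k)^2/m.
assert sp.simplify(double + hbar**2 * k**2 / m * psi) == 0
```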

Now consider the expectation of this double commutator, expanded with some unintuitive steps that have been motivated by working backwards

\begin{aligned}{\langle {s} \rvert} \left[{\left[{H},{e^{i \mathbf{k} \cdot \mathbf{r}}}\right]},{e^{-i \mathbf{k} \cdot \mathbf{r}}}\right] {\lvert {s} \rangle}&={\langle {s} \rvert} 2 H - e^{i \mathbf{k} \cdot \mathbf{r}} H e^{-i \mathbf{k} \cdot \mathbf{r}} - e^{-i \mathbf{k} \cdot \mathbf{r}} H e^{i \mathbf{k} \cdot \mathbf{r}} {\lvert {s} \rangle} \\ &={\langle {s} \rvert} 2 e^{-i \mathbf{k} \cdot \mathbf{r}} e^{i \mathbf{k} \cdot \mathbf{r}} H- 2 e^{-i \mathbf{k} \cdot \mathbf{r}} H e^{i \mathbf{k} \cdot \mathbf{r}} {\lvert {s} \rangle} \\ &=2 \sum_n {\langle {s} \rvert} e^{-i \mathbf{k} \cdot \mathbf{r}} {\lvert {n} \rangle} {\langle {n} \rvert} e^{i \mathbf{k} \cdot \mathbf{r}} H {\lvert {s} \rangle}- {\langle {s} \rvert} e^{-i \mathbf{k} \cdot \mathbf{r}} H {\lvert {n} \rangle} {\langle {n} \rvert} e^{i \mathbf{k} \cdot \mathbf{r}} {\lvert {s} \rangle} \\ &=2 \sum_n E_s {\langle {s} \rvert} e^{-i \mathbf{k} \cdot \mathbf{r}} {\lvert {n} \rangle} {\langle {n} \rvert} e^{i \mathbf{k} \cdot \mathbf{r}} {\lvert {s} \rangle}- E_n {\langle {s} \rvert} e^{-i \mathbf{k} \cdot \mathbf{r}} {\lvert {n} \rangle} {\langle {n} \rvert} e^{i \mathbf{k} \cdot \mathbf{r}} {\lvert {s} \rangle} \\ &=2 \sum_n (E_s - E_n){\left\lvert{{\langle {n} \rvert} e^{i \mathbf{k} \cdot \mathbf{r}} {\lvert {s} \rangle}}\right\rvert}^2\end{aligned}

By 3.22, we have completed the problem

\begin{aligned}\sum_n (E_n - E_s) {\left\lvert{{\langle {n} \rvert} e^{i \mathbf{k} \cdot \mathbf{r}} {\lvert {s} \rangle}}\right\rvert}^2 &= \frac{(\hbar \mathbf{k})^2}{2m}.\end{aligned} \hspace{\stretch{1}}(3.23)
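With $\hbar = m = \omega = 1$ the sum rule 3.23 predicts the value $k^2/2$ for a harmonic oscillator. A truncated-basis numeric check (my own aside; for the ground state the truncation error is negligible, since $e^{ikx}{\lvert {0} \rangle}$ has Poisson-distributed weights concentrated at low $n$):

```python
import numpy as np

# Truncated SHO basis, hbar = m = omega = 1, so E_n = n + 1/2.
N = 60
a = np.diag(np.sqrt(np.arange(1.0, N)), 1)
X = (a + a.T) / np.sqrt(2)
E = np.arange(N) + 0.5

# exp(i k X) via eigendecomposition of the real symmetric X.
k = 1.0
w, V = np.linalg.eigh(X)
U = (V * np.exp(1j * k * w)) @ V.T

# s = 0 (ground state): sum_n (E_n - E_0) |<n| e^{ikx} |0>|^2
amps = U[:, 0]
total = np.sum((E - E[0]) * np.abs(amps) ** 2)

# Expect (hbar k)^2/(2m) = 0.5.
assert np.isclose(total, 0.5, atol=1e-6)
```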

There is one subtlety above that is worth explicit mention before moving on. In particular, I did not find it intuitive that the following is true

\begin{aligned}{\langle {s} \rvert} e^{i \mathbf{k} \cdot \mathbf{r}} H e^{-i \mathbf{k} \cdot \mathbf{r}} + e^{-i \mathbf{k} \cdot \mathbf{r}} H e^{i \mathbf{k} \cdot \mathbf{r}} {\lvert {s} \rangle} &={\langle {s} \rvert} 2 e^{-i \mathbf{k} \cdot \mathbf{r}} H e^{i \mathbf{k} \cdot \mathbf{r}} {\lvert {s} \rangle}.\end{aligned} \hspace{\stretch{1}}(3.24)

However, observe that both of these exponential sandwich operators $e^{i \mathbf{k} \cdot \mathbf{r}} H e^{-i \mathbf{k} \cdot \mathbf{r}}$, and $e^{-i \mathbf{k} \cdot \mathbf{r}} H e^{i \mathbf{k} \cdot \mathbf{r}}$ are Hermitian, since we have for example

\begin{aligned}(e^{i \mathbf{k} \cdot \mathbf{r}} H e^{-i \mathbf{k} \cdot \mathbf{r}})^\dagger&= (e^{-i \mathbf{k} \cdot \mathbf{r}})^\dagger H^\dagger (e^{i \mathbf{k} \cdot \mathbf{r}})^\dagger \\ &= e^{i \mathbf{k} \cdot \mathbf{r}} H e^{-i \mathbf{k} \cdot \mathbf{r}}\end{aligned}

Also observe that these two operators are complex conjugates of each other, and with $\mathbf{k} \cdot \mathbf{r} = \alpha$ for short, can be written

\begin{aligned}e^{i \alpha} H e^{-i \alpha}&= \cos\alpha H \cos \alpha + \sin\alpha H \sin\alpha+ i\sin\alpha H \cos \alpha -i \cos\alpha H \sin\alpha \\ e^{-i \alpha} H e^{i \alpha} &= \cos\alpha H \cos \alpha + \sin\alpha H \sin\alpha- i\sin\alpha H \cos \alpha +i \cos\alpha H \sin\alpha\end{aligned} \hspace{\stretch{1}}(3.25)

Because $H$ is real valued, and the expectation value of a Hermitian operator is real valued, none of the imaginary terms can contribute to the expectation values, and in the summation of 3.24 we can thus pick and double either of the exponential sandwich terms, as desired.

# References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.

[2] R. Liboff. Introductory quantum mechanics. 2003.

[3] D. Bohm. Quantum Theory. Courier Dover Publications, 1989.

[4] Peeter Joot. Spherical Polar unit vectors in exponential form. [online]. http://sites.google.com/site/peeterjoot/math2009/sphericalPolarUnit.pdf.

## Unitary exponential sandwich

Posted by peeterjoot on September 27, 2010

# Motivation.

One of the chapter II exercises in [1] involves a commutator exponential sandwich of the form

\begin{aligned}e^{i F} B e^{-iF}\end{aligned} \hspace{\stretch{1}}(1.1)

where $F$ is Hermitian. Asking about commutators on physicsforums I was told that such sandwiches (my term) preserve expectation values, and also have a Taylor series like expansion involving the repeated commutators. Let’s derive the commutator relationship.

# Guts

Let’s expand a sandwich of this form in series, and shuffle the summation order so that we sum over all the index plane diagonals $k + m = \text{constant}$. That is

\begin{aligned}e^{A} B e^{-A}&=\sum_{k,m=0}^\infty \frac{1}{{k!m!}} A^k B (-A)^m \\ &=\sum_{r=0}^\infty \sum_{m=0}^r \frac{1}{{(r-m)!m!}} A^{r-m} B (-A)^m \\ &=\sum_{r=0}^\infty \frac{1}{{r!}} \sum_{m=0}^r \frac{r!}{(r-m)!m!} A^{r-m} B (-A)^m \\ &=\sum_{r=0}^\infty \frac{1}{{r!}} \sum_{m=0}^r \binom{r}{m} A^{r-m} B (-A)^m.\end{aligned}

Assuming that these interior sums can be written as commutators, we’ll shortly have an induction exercise. Let’s write these out for a couple values of $r$ to get a feel for things.

## $r=1$

\begin{aligned}\binom{1}{0} A B + \binom{1}{1} B (-A) &= \left[{A},{B}\right]\end{aligned}

## $r=2$

\begin{aligned}\binom{2}{0} A^2 B + \binom{2}{1} A B (-A) + \binom{2}{2} B (-A)^2 &= A^2 B - 2 A B A + B A^2\end{aligned}

This compares exactly to the double commutator:

\begin{aligned}\left[{A},{\left[{A},{B}\right]}\right]&= A(A B - B A) -(A B - B A)A \\ &= A^2 B - A B A - A B A + B A^2 \\ &= A^2 B - 2 A B A + B A^2 \\ \end{aligned}

## $r=3$

\begin{aligned}\binom{3}{0} A^3 B + \binom{3}{1} A^2 B (-A) + \binom{3}{2} A B (-A)^2 + \binom{3}{3} B (-A)^3 &= A^3 B - 3 A^2 B A + 3 A B A^2 - B A^3.\end{aligned}

And this compares exactly to the triple commutator

\begin{aligned} [A, [A, [A, B] ] ] &= A^3 B - 2 A^2 B A + A B A^2 -(A^2 B A - 2 A B A^2 + B A^3) \\ &= A^3 B - 3 A^2 B A + 3 A B A^2 -B A^3 \\ \end{aligned}

The induction pattern is clear. Let’s write the $r$ fold commutator as

\begin{aligned}C_r(A,B) &\equiv \underbrace{[A, [A, \cdots, [A,}_{r \text{ times}} B]] \cdots ] = \sum_{m=0}^r \binom{r}{m} A^{r-m} B (-A)^m,\end{aligned} \hspace{\stretch{1}}(2.2)

and calculate this for the $r+1$ case to verify the induction hypothesis. We have

\begin{aligned}C_{r+1}(A,B) &= \sum_{m=0}^r \binom{r}{m} \left( A^{r-m+1} B (-A)^m-A^{r-m} B (-A)^{m} A \right) \\ &= \sum_{m=0}^r \binom{r}{m} \left( A^{r-m+1} B (-A)^m+A^{r-m} B (-A)^{m+1} \right) \\ &= A^{r+1} B+ \sum_{m=1}^r \binom{r}{m} A^{r-m+1} B (-A)^m+ \sum_{m=0}^{r-1} \binom{r}{m} A^{r-m} B (-A)^{m+1} + B (-A)^{r+1} \\ &= A^{r+1} B+ \sum_{k=0}^{r-1} \binom{r}{k+1} A^{r-k} B (-A)^{k+1}+ \sum_{m=0}^{r-1} \binom{r}{m} A^{r-m} B (-A)^{m+1} + B (-A)^{r+1} \\ &= A^{r+1} B+ \sum_{k=0}^{r-1} \left( \binom{r}{k+1} + \binom{r}{k} \right) A^{r-k} B (-A)^{k+1}+ B (-A)^{r+1} \\ \end{aligned}

We now have to sum those binomial coefficients. I like the search and replace technique for this, picking two visibly distinct numbers for $r$ and $k$ that are easy to manipulate without abstract confusion. How about $r=7$, and $k=3$. Using those we have

\begin{aligned}\binom{7}{3+1} + \binom{7}{3} &=\frac{7!}{(3+1)!(7-3-1)!}+\frac{7!}{3!(7-3)!} \\ &=\frac{7!(7-3)}{(3+1)!(7-3)!}+\frac{7!(3+1)}{(3+1)!(7-3)!} \\ &=\frac{7! \left( 7-3 + 3 + 1 \right) }{(3+1)!(7-3)!} \\ &=\frac{(7+1)! }{(3+1)!((7+1)-(3+1))!}.\end{aligned}

Straight text replacement of $7$ and $3$ with $r$ and $k$ respectively now gives the harder to follow, but more general identity

\begin{aligned}\binom{r}{k+1} + \binom{r}{k} &=\frac{r!}{(k+1)!(r-k-1)!}+\frac{r!}{k!(r-k)!} \\ &=\frac{r!(r-k)}{(k+1)!(r-k)!}+\frac{r!(k+1)}{(k+1)!(r-k)!} \\ &=\frac{r! \left( r-k + k + 1 \right) }{(k+1)!(r-k)!} \\ &=\frac{(r+1)! }{(k+1)!((r+1)-(k+1))!} \\ &=\binom{r+1}{k+1}\end{aligned}

For our commutator we now have

\begin{aligned}C_{r+1}(A,B) &= A^{r+1} B+ \sum_{k=0}^{r-1} \binom{r+1}{k+1} A^{r-k} B (-A)^{k+1} + B (-A)^{r+1} \\ &= A^{r+1} B+ \sum_{s=1}^{r} \binom{r+1}{s} A^{r+1-s} B (-A)^{s} + B (-A)^{r+1} \\ &= \sum_{s=0}^{r+1} \binom{r+1}{s} A^{r+1-s} B (-A)^{s} \qquad\square\end{aligned}
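The closed form 2.2 just proved can also be spot-checked numerically against the nested commutator definition. A small sketch of my own, using random matrices and numpy:

```python
import numpy as np
from math import comb

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

def nested(r):
    """r-fold commutator [A, [A, ..., [A, B]...]]."""
    C = B
    for _ in range(r):
        C = A @ C - C @ A
    return C

def binomial_form(r):
    """sum_m C(r, m) A^{r-m} B (-A)^m."""
    return sum(comb(r, m)
               * np.linalg.matrix_power(A, r - m) @ B
               @ np.linalg.matrix_power(-A, m)
               for m in range(r + 1))

# The two forms agree for every r (checked here through r = 5).
for r in range(6):
    assert np.allclose(nested(r), binomial_form(r))
```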

That completes the inductive proof and allows us to write

\begin{aligned}e^A B e^{-A}&=\sum_{r=0}^\infty \frac{1}{{r!}} C_{r}(A,B),\end{aligned} \hspace{\stretch{1}}(2.3)

Or, in explicit form

\begin{aligned}e^A B e^{-A}&=B + \frac{1}{{1!}} \left[{A},{B}\right]+ \frac{1}{{2!}} \left[{A},{\left[{A},{B}\right]}\right]+ \cdots\end{aligned} \hspace{\stretch{1}}(2.4)

# References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.

## Bivector Geometry in Geometric Algebra.

Posted by peeterjoot on September 20, 2009

# Motivation.

Consider the derivative of a vector parametrized bivector square such as

\begin{aligned}\frac{d}{d\lambda} {(\mathbf{x} \wedge \mathbf{k})^2} = \left(\frac{d\mathbf{x}}{d\lambda} \wedge \mathbf{k}\right) \left(\mathbf{x} \wedge \mathbf{k}\right)+\left(\mathbf{x} \wedge \mathbf{k}\right) \left(\frac{d \mathbf{x}}{d\lambda} \wedge \mathbf{k}\right)\end{aligned}

where $\mathbf{k}$ is constant. In this case, the left hand side is a scalar, so the right hand side, this symmetric product of bivectors, must also be a scalar. In the more general case, do we have any reason to assume a symmetric bivector product is a scalar as is the case for the symmetric vector product?

Here this question is considered, and products of intersecting bivectors are examined. We take intersecting bivectors to mean that a common vector ($\mathbf{k}$ above) can be factored out of both bivectors, leaving a vector remainder in each. Since all non-coplanar bivectors in $\mathbb{R}^{3}$ intersect, this examination covers the important special case of three dimensional plane geometry.

A result of this examination is that many of the concepts familiar from vector geometry such as orthogonality, projection, and rejection will have direct bivector equivalents.

General bivector geometry, in spaces where non-coplanar bivectors do not necessarily intersect (such as in $\mathbb{R}^{4}$) is also considered. Some of the results require plane intersection, or become simpler in such circumstances. This will be pointed out when appropriate.

# Components of grade two multivector product.

The geometric product of two bivectors can be written:

\begin{aligned}\mathbf{A} \mathbf{B} = {\left\langle{{\mathbf{A} \mathbf{B}}}\right\rangle}_{{0}}+{\left\langle{{\mathbf{A} \mathbf{B}}}\right\rangle}_{{2}}+{\left\langle{{\mathbf{A} \mathbf{B}}}\right\rangle}_{{4}}= {\mathbf{A} \cdot \mathbf{B}}+{\left\langle{{\mathbf{A} \mathbf{B}}}\right\rangle}_{{2}}+{\mathbf{A} \wedge \mathbf{B}}\end{aligned} \quad\quad\quad(1)

\begin{aligned}\mathbf{B} \mathbf{A} = {\left\langle{{\mathbf{B} \mathbf{A}}}\right\rangle}_{{0}}+{\left\langle{{\mathbf{B} \mathbf{A}}}\right\rangle}_{{2}}+{\left\langle{{\mathbf{B} \mathbf{A}}}\right\rangle}_{{4}}= {\mathbf{B} \cdot \mathbf{A}}+{\left\langle{{\mathbf{B} \mathbf{A}}}\right\rangle}_{{2}}+{\mathbf{B} \wedge \mathbf{A}}\end{aligned} \quad\quad\quad(2)

Because we have three terms involved, unlike the vector dot and wedge product we cannot generally separate these terms by symmetric and antisymmetric parts. However, forming those sums is still worthwhile, especially for the case of intersecting bivectors, since the last term is zero in that case.

## Sign change of each grade term with commutation.

Starting with the last term we can first observe that

\begin{aligned}\mathbf{A} \wedge \mathbf{B} = \mathbf{B} \wedge \mathbf{A}\end{aligned} \quad\quad\quad(3)

To show this let $\mathbf{A} = \mathbf{a} \wedge \mathbf{b}$, and $\mathbf{B} = \mathbf{c} \wedge \mathbf{d}$. When $\mathbf{A} \wedge \mathbf{B} \ne 0$, one can write:

\begin{aligned}\mathbf{A} \wedge \mathbf{B} &= \mathbf{a} \wedge \mathbf{b} \wedge \mathbf{c} \wedge \mathbf{d} \\ &= - \mathbf{b} \wedge \mathbf{c} \wedge \mathbf{d} \wedge \mathbf{a} \\ &= \mathbf{c} \wedge \mathbf{d} \wedge \mathbf{a} \wedge \mathbf{b} \\ &= \mathbf{B} \wedge \mathbf{A} \\ \end{aligned}

To see how the signs of the remaining two terms vary with commutation form:

\begin{aligned}(\mathbf{A} + \mathbf{B})^2&= (\mathbf{A} + \mathbf{B})(\mathbf{A} + \mathbf{B}) \\ &= \mathbf{A}^2 + \mathbf{B}^2 + \mathbf{A} \mathbf{B} + \mathbf{B} \mathbf{A} \\ \end{aligned}

When $\mathbf{A}$ and $\mathbf{B}$ intersect we can write $\mathbf{A} = \mathbf{a} \wedge \mathbf{x}$, and $\mathbf{B} = \mathbf{b} \wedge \mathbf{x}$, thus the sum is a bivector

\begin{aligned}(\mathbf{A} + \mathbf{B})= (\mathbf{a} + \mathbf{b}) \wedge \mathbf{x}\end{aligned}

And so, the square of the two is a scalar. When $\mathbf{A}$ and $\mathbf{B}$ have only non intersecting components, such as the grade two $\mathbb{R}^{4}$ multivector $\mathbf{e}_{12} + \mathbf{e}_{34}$, the square of this sum will have both grade four and scalar parts.

Since the LHS equals the RHS, the grades of the two must also be the same. This implies that the quantity

\begin{aligned}\mathbf{A} \mathbf{B} + \mathbf{B} \mathbf{A} = \mathbf{A} \cdot \mathbf{B} + \mathbf{B} \cdot \mathbf{A}+{\left\langle{{\mathbf{A} \mathbf{B}}}\right\rangle}_{2} + {\left\langle{{\mathbf{B} \mathbf{A}}}\right\rangle}_{2}+\mathbf{A} \wedge \mathbf{B} + \mathbf{B} \wedge \mathbf{A}\end{aligned}

is a scalar $\iff$ $\mathbf{A} + \mathbf{B}$ is a bivector, and in general it has scalar and grade four terms. Because this symmetric sum has no grade two terms, regardless of whether $\mathbf{A}$ and $\mathbf{B}$ intersect, we have:

\begin{aligned}{\left\langle{{\mathbf{A} \mathbf{B}}}\right\rangle}_{2} + {\left\langle{{\mathbf{B} \mathbf{A}}}\right\rangle}_{2} = 0\end{aligned}

\begin{aligned}\implies{\left\langle{{\mathbf{A} \mathbf{B}}}\right\rangle}_{2} = -{\left\langle{{\mathbf{B} \mathbf{A}}}\right\rangle}_{2}\end{aligned} \quad\quad\quad(4)

One would intuitively expect $\mathbf{A} \cdot \mathbf{B} = \mathbf{B} \cdot \mathbf{A}$. This can be demonstrated by forming the complete symmetric sum

\begin{aligned}\mathbf{A} \mathbf{B} + \mathbf{B} \mathbf{A} &= {\mathbf{A} \cdot \mathbf{B}} +{\mathbf{B} \cdot \mathbf{A}}+{\left\langle{{\mathbf{A} \mathbf{B}}}\right\rangle}_{{2}} +{\left\langle{{\mathbf{B} \mathbf{A}}}\right\rangle}_{{2}}+{\mathbf{A} \wedge \mathbf{B}} + {\mathbf{B} \wedge \mathbf{A}} \\ &= {\mathbf{A} \cdot \mathbf{B}} +{\mathbf{B} \cdot \mathbf{A}}+{\left\langle{{\mathbf{A} \mathbf{B}}}\right\rangle}_{{2}} -{\left\langle{{\mathbf{A} \mathbf{B}}}\right\rangle}_{{2}}+{\mathbf{A} \wedge \mathbf{B}} + {\mathbf{A} \wedge \mathbf{B}} \\ &= {\mathbf{A} \cdot \mathbf{B}} +{\mathbf{B} \cdot \mathbf{A}}+2{\mathbf{A} \wedge \mathbf{B}} \\ \end{aligned}

The LHS is invariant under interchange of $\mathbf{A}$ and $\mathbf{B}$, as is ${\mathbf{A} \wedge \mathbf{B}}$. So for the RHS to also be invariant, the remaining grade 0 term must be too:

\begin{aligned}\mathbf{A} \cdot \mathbf{B} = \mathbf{B} \cdot \mathbf{A}\end{aligned} \quad\quad\quad(5)

## Dot, wedge and grade two terms of bivector product.

Collecting the results of the previous section and substituting back into equation 1 we have:

\begin{aligned}\mathbf{A} \cdot \mathbf{B} = {\left\langle{{\frac{\mathbf{A} \mathbf{B} + \mathbf{B}\mathbf{A}}{2}}}\right\rangle}_{{0}}\end{aligned} \quad\quad\quad(6)

\begin{aligned}{\left\langle{{\mathbf{A} \mathbf{B}}}\right\rangle}_{2} = \frac{\mathbf{A} \mathbf{B} - \mathbf{B}\mathbf{A}}{2}\end{aligned} \quad\quad\quad(7)

\begin{aligned}\mathbf{A} \wedge \mathbf{B} = {\left\langle{{\frac{\mathbf{A} \mathbf{B} + \mathbf{B}\mathbf{A}}{2}}}\right\rangle}_{{4}}\end{aligned} \quad\quad\quad(8)

When these intersect in a line the wedge term is zero, so for that special case we can write:

\begin{aligned}\mathbf{A} \cdot \mathbf{B} = \frac{\mathbf{A} \mathbf{B} + \mathbf{B}\mathbf{A}}{2}\end{aligned}

\begin{aligned}{\left\langle{{\mathbf{A} \mathbf{B}}}\right\rangle}_{2} = \frac{\mathbf{A} \mathbf{B} - \mathbf{B}\mathbf{A}}{2}\end{aligned}

\begin{aligned}\mathbf{A} \wedge \mathbf{B} = 0\end{aligned}

(note that this is always the case for $\mathbb{R}^{3}$).
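The sign relations and the grade decomposition of equations 4 through 8 can be checked numerically. The sketch below (my own verification aid, not part of the original derivation) rolls a minimal geometric product over a Euclidean metric using the standard bitmask encoding of basis blades:

```python
import itertools
import random

def blade_sign(a, b):
    """Reordering sign for the product of basis blades a, b (bitmask encoded)."""
    a >>= 1
    s = 0
    while a:
        s += bin(a & b).count('1')
        a >>= 1
    return -1 if s % 2 else 1

def gp(A, B):
    """Geometric product of multivectors {blade_bitmask: coeff}, Euclidean metric."""
    out = {}
    for a, ca in A.items():
        for b, cb in B.items():
            out[a ^ b] = out.get(a ^ b, 0.0) + blade_sign(a, b) * ca * cb
    return out

def grade(A, k):
    """Grade-k part <A>_k."""
    return {b: c for b, c in A.items() if bin(b).count('1') == k}

def neg(A):
    return {b: -c for b, c in A.items()}

def equal(A, B, tol=1e-12):
    return all(abs(A.get(b, 0.0) - B.get(b, 0.0)) < tol for b in set(A) | set(B))

# Two random bivectors in R^4; basis bivectors are bitmasks with two bits set.
random.seed(0)
blades2 = [a | b for a, b in itertools.combinations([1, 2, 4, 8], 2)]
A = {b: random.uniform(-1, 1) for b in blades2}
B = {b: random.uniform(-1, 1) for b in blades2}

AB, BA = gp(A, B), gp(B, A)

assert equal(grade(AB, 0), grade(BA, 0))        # A . B = B . A
assert equal(grade(AB, 4), grade(BA, 4))        # A ^ B = B ^ A
assert equal(grade(AB, 2), neg(grade(BA, 2)))   # <AB>_2 = -<BA>_2
```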

# Intersection of planes.

Starting with two planes specified parametrically, each in terms of two direction vectors and a point on the plane:

\begin{aligned}\mathbf{x} &= \mathbf{p} + \alpha \mathbf{u} + \beta \mathbf{v} \\ \mathbf{y} &= \mathbf{q} + a \mathbf{w} + b \mathbf{z} \\ \end{aligned} \quad\quad\quad(9)

If these intersect then all points on the line must satisfy $\mathbf{x} = \mathbf{y}$, so the solution requires:

\begin{aligned}\mathbf{p} + \alpha \mathbf{u} + \beta \mathbf{v} = \mathbf{q} + a \mathbf{w} + b \mathbf{z}\end{aligned}

\begin{aligned}\implies(\mathbf{p} + \alpha \mathbf{u} + \beta \mathbf{v}) \wedge \mathbf{w} \wedge \mathbf{z} = (\mathbf{q} + a \mathbf{w} + b \mathbf{z}) \wedge \mathbf{w} \wedge \mathbf{z} = \mathbf{q} \wedge \mathbf{w} \wedge \mathbf{z}\end{aligned}

Rearranging for $\beta$, and writing $\mathbf{B} = \mathbf{w} \wedge \mathbf{z}$:

\begin{aligned}\beta = \frac{\mathbf{q} \wedge \mathbf{B} - (\mathbf{p} + \alpha \mathbf{u}) \wedge \mathbf{B}}{\mathbf{v} \wedge \mathbf{B}}\end{aligned}

Note that when the solution exists the left vs right order of the division by $\mathbf{v} \wedge \mathbf{B}$ should not matter since the numerator will be proportional to this bivector (or else the $\beta$ would not be a scalar).

Substitution of $\beta$ back into $\mathbf{x} = \mathbf{p} + \alpha \mathbf{u} + \beta \mathbf{v}$ (all points in the first plane) gives you a parametric equation for a line:

\begin{aligned}\mathbf{x} = \mathbf{p} + \frac{(\mathbf{q}-\mathbf{p})\wedge \mathbf{B}}{\mathbf{v} \wedge \mathbf{B}}\mathbf{v} + \alpha\frac{1}{\mathbf{v} \wedge \mathbf{B}}((\mathbf{v} \wedge \mathbf{B}) \mathbf{u} - (\mathbf{u} \wedge \mathbf{B})\mathbf{v})\end{aligned}

Where a point on the line is:

\begin{aligned}\mathbf{p} + \frac{(\mathbf{q}-\mathbf{p})\wedge \mathbf{B}}{\mathbf{v} \wedge \mathbf{B}}\mathbf{v} \end{aligned}

And a direction vector for the line is:

\begin{aligned}\frac{1}{\mathbf{v} \wedge \mathbf{B}}((\mathbf{v} \wedge \mathbf{B}) \mathbf{u} - (\mathbf{u} \wedge \mathbf{B})\mathbf{v})\end{aligned}

\begin{aligned}\propto(\mathbf{v} \wedge \mathbf{B})^2 \mathbf{u} - (\mathbf{v} \wedge \mathbf{B})(\mathbf{u} \wedge \mathbf{B})\mathbf{v}\end{aligned}

Now, this result is only valid if $\mathbf{v} \wedge \mathbf{B} \ne 0$ (ie: line of intersection is not directed along $\mathbf{v}$), but if that is the case the second form will be zero. Thus we can add the results (or any non-zero linear combination of) allowing for either of $\mathbf{u}$, or $\mathbf{v}$ to be directed along the line of intersection:

\begin{aligned}a\left( (\mathbf{v} \wedge \mathbf{B})^2 \mathbf{u}- (\mathbf{v} \wedge \mathbf{B})(\mathbf{u} \wedge \mathbf{B})\mathbf{v} \right)+ b\left((\mathbf{u} \wedge \mathbf{B})^2 \mathbf{v} - (\mathbf{u} \wedge \mathbf{B})(\mathbf{v} \wedge \mathbf{B})\mathbf{u}\right)\end{aligned} \quad\quad\quad(12)

Alternately, one could formulate this in terms of $\mathbf{A} = \mathbf{u} \wedge \mathbf{v}$, $\mathbf{w}$, and $\mathbf{z}$. Is there a more symmetrical form for this direction vector?

## Vector along line of intersection in $\mathbb{R}^{3}$

For $\mathbb{R}^{3}$ one can solve the intersection problem using the normals to the planes. For simplicity put the origin on the line of intersection (and all planes through a common point in $\mathbb{R}^{3}$ have at least a line of intersection). In this case, for bivectors $\mathbf{A}$ and $\mathbf{B}$, normals to those planes are $i\mathbf{A}$, and $i\mathbf{B}$ respectively. The plane through both of those normals is:

\begin{aligned}(i\mathbf{A}) \wedge (i\mathbf{B})= \frac{(i\mathbf{A})(i\mathbf{B}) - (i\mathbf{B})(i\mathbf{A})}{2} = \frac{\mathbf{B}\mathbf{A} - \mathbf{A}\mathbf{B}}{2} = {\left\langle{{\mathbf{B}\mathbf{A}}}\right\rangle}_{2}\end{aligned}

The normal to this plane

\begin{aligned}i{\left\langle{{\mathbf{B}\mathbf{A}}}\right\rangle}_{2}\end{aligned} \quad\quad\quad(13)

is directed along the line of intersection. This result is more appealing than the general $\mathbb{R}^{N}$ result of equation 12, not just because it is simpler, but also because it is a function of only the bivectors for the planes, without a requirement to find or calculate two specific independent direction vectors in one of the planes.
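This $\mathbb{R}^{3}$ statement is easy to corroborate with ordinary vector algebra, since the duals $i\mathbf{A}$ and $i\mathbf{B}$ are the usual cross-product normals, and equation 13 then amounts to the cross product of the two normals. A quick numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two planes through the origin in R^3, each spanned by a pair of vectors.
u1, v1 = rng.standard_normal(3), rng.standard_normal(3)
u2, v2 = rng.standard_normal(3), rng.standard_normal(3)

# Duals of the plane bivectors are the usual normals: i(u ^ v) ~ u x v.
n1 = np.cross(u1, v1)
n2 = np.cross(u2, v2)

# The line of intersection is normal to the plane spanned by n1 and n2.
d = np.cross(n1, n2)

# d must be orthogonal to both normals, i.e. it lies in both planes.
assert abs(np.dot(d, n1)) < 1e-10
assert abs(np.dot(d, n2)) < 1e-10
```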

## Applying this result to $\mathbb{R}^{N}$

If you reject the component of $\mathbf{A}$ from $\mathbf{B}$ for two intersecting bivectors:

\begin{aligned}\text{Rej}_{\mathbf{A}}(\mathbf{B}) = \frac{1}{\mathbf{A}}{\left\langle{{\mathbf{A}\mathbf{B}}}\right\rangle}_{2}\end{aligned}

the line of intersection remains the same; that operation rotates $\mathbf{B}$ so that the two are mutually perpendicular. This essentially reduces the problem to that of the three dimensional case, so the solution has to be of the same form: one just needs to calculate a “pseudoscalar” (the join) for the subspace spanned by the two bivectors.

That can be computed by taking any direction vector that is on one plane, but isn’t in the second. For example, pick a vector $\mathbf{u}$ in the plane $\mathbf{A}$ that is not on the intersection of $\mathbf{A}$ and $\mathbf{B}$. In mathese that is $\mathbf{u} = \frac{1}{\mathbf{A}}(\mathbf{A}\cdot \mathbf{u})$ (or $\mathbf{u} \wedge \mathbf{A} = 0$), where $\mathbf{u} \wedge \mathbf{B} \ne 0$. Thus a pseudoscalar for this subspace is:

\begin{aligned}\mathbf{i} = \frac{\mathbf{u} \wedge \mathbf{B}}{{\left\lvert{\mathbf{u} \wedge \mathbf{B}}\right\rvert}}\end{aligned}

To calculate the direction vector along the intersection we don’t care about the scaling above. Also note that provided $\mathbf{u}$ has a component in the plane $\mathbf{A}$, $\mathbf{u} \cdot \mathbf{A}$ is also in the plane (it’s rotated $\pi/2$ from $\frac{1}{\mathbf{A}}(\mathbf{A} \cdot \mathbf{u})$).

Thus, provided that $\mathbf{u} \cdot \mathbf{A}$ isn’t on the intersection, a scaled “pseudoscalar” for the subspace can be calculated from any vector $\mathbf{u}$ with a component in the plane $\mathbf{A}$:

\begin{aligned}\mathbf{i} \propto (\mathbf{u} \cdot \mathbf{A}) \wedge \mathbf{B}\end{aligned}

Thus a vector along the intersection is:

\begin{aligned}\mathbf{d} = ((\mathbf{u} \cdot \mathbf{A}) \wedge \mathbf{B}) {\left\langle{{\mathbf{A}\mathbf{B}}}\right\rangle}_{2}\end{aligned} \quad\quad\quad(14)

Interchange of $\mathbf{A}$ and $\mathbf{B}$ in either the trivector or bivector terms above would also work.

Without showing the steps one can write the complete parametric solution of the line through the planes of equations 9 in terms of this direction vector:

\begin{aligned}\mathbf{x} = \mathbf{p} + \left(\frac{(\mathbf{q} - \mathbf{p})\wedge \mathbf{B}}{(\mathbf{d} \cdot \mathbf{A}) \wedge \mathbf{B}}\right) (\mathbf{d} \cdot \mathbf{A}) + \alpha \mathbf{d}\end{aligned} \quad\quad\quad(15)

Observe that $(\mathbf{d} \cdot \mathbf{A}) \ne 0$ and $(\mathbf{d} \cdot \mathbf{A}) \wedge \mathbf{B} \ne 0$ (unless $\mathbf{A}$ and $\mathbf{B}$ are coplanar), so this is a natural generator of the pseudoscalar for the subspace, and as such it shows up in the expression above.

Also observe the non-coincidental similarity of the $\mathbf{q}-\mathbf{p}$ term to Cramer’s rule (a ratio of determinants).

# Components of a grade two multivector

The procedure to calculate projections and rejections of planes onto planes is similar to a vector projection onto a space.

To arrive at that result we can consider the product of a grade two multivector $\mathbf{A}$ with a bivector $\mathbf{B}$ and its inverse (the restriction that $\mathbf{B}$ be a bivector, a grade two multivector that can be written as a wedge product of two vectors, is required for general invertibility).

\begin{aligned}\mathbf{A}\frac{1}{\mathbf{B}}\mathbf{B} &= \left(\mathbf{A} \cdot \frac{1}{\mathbf{B}} + {\left\langle{{ \mathbf{A} \frac{1}{\mathbf{B}} }}\right\rangle}_{2} + \mathbf{A} \wedge \frac{1}{\mathbf{B}}\right) \mathbf{B} \\ &= \mathbf{A} \cdot \frac{1}{\mathbf{B}} \mathbf{B} \\ &+{\left\langle{{ \mathbf{A} \frac{1}{\mathbf{B}} }}\right\rangle}_{2} \cdot \mathbf{B} +{\left\langle{{ {\left\langle{{ \mathbf{A} \frac{1}{\mathbf{B}} }}\right\rangle}_{2} \mathbf{B} }}\right\rangle}_{2}+{\left\langle{{ \mathbf{A} \frac{1}{\mathbf{B}} }}\right\rangle}_{2} \wedge \mathbf{B} \\ &+\left(\mathbf{A} \wedge \frac{1}{\mathbf{B}}\right) \cdot \mathbf{B} +{\left\langle{{\mathbf{A} \wedge \frac{1}{\mathbf{B}} \mathbf{B}}}\right\rangle}_{4}+\mathbf{A} \wedge \frac{1}{\mathbf{B}} \wedge \mathbf{B} \\ \end{aligned}

Since $\frac{1}{\mathbf{B}} = -\frac{\mathbf{B}}{{{\left\lvert{\mathbf{B}}\right\rvert}}^2}$, this implies that the 6-grade term $\mathbf{A} \wedge \frac{1}{\mathbf{B}} \wedge \mathbf{B}$ is zero. Since the LHS has grade 2, this implies that the 0-grade and 4-grade terms are zero (also independently implies that the 6-grade term is zero). This leaves:

\begin{aligned}\mathbf{A}= \mathbf{A} \cdot \frac{1}{\mathbf{B}} \mathbf{B} \\ +{\left\langle{{{\left\langle{{\mathbf{A}\frac{1}{\mathbf{B}}}}\right\rangle}_{2} \mathbf{B}}}\right\rangle}_{2}+\left(\mathbf{A} \wedge \frac{1}{\mathbf{B}}\right) \cdot \mathbf{B} \end{aligned} \quad\quad\quad(16)

This could be written somewhat more symmetrically as

\begin{aligned}\mathbf{A}&=\sum_{i=0,2,4}{\left\langle{{{\left\langle{{\mathbf{A} \frac{1}{\mathbf{B}}}}\right\rangle}_{{i}} \mathbf{B}}}\right\rangle}_{2} \\ &= {\left\langle{{ \left\langle{{\mathbf{A} \frac{1}{\mathbf{B}}}}\right\rangle \mathbf{B} +{\left\langle{{\mathbf{A} \frac{1}{\mathbf{B}}}}\right\rangle}_{2} \mathbf{B} +{\left\langle{{\mathbf{A} \frac{1}{\mathbf{B}}}}\right\rangle}_{4} \mathbf{B} }}\right\rangle}_{2} \\ \end{aligned}

This is also a more direct way to derive the result in retrospect.

Looking at equation 16 we have three terms. The first is

\begin{aligned}\mathbf{A} \cdot \frac{1}{\mathbf{B}} \mathbf{B}\end{aligned} \quad\quad\quad(17)

This is the component of $\mathbf{A}$ that lies in the plane $\mathbf{B}$ (the projection of $\mathbf{A}$ onto $\mathbf{B}$).

The next is

\begin{aligned}{\left\langle{{{\left\langle{{\mathbf{A}\frac{1}{\mathbf{B}}}}\right\rangle}_{2} \mathbf{B}}}\right\rangle}_{2}\end{aligned} \quad\quad\quad(18)

If $\mathbf{B}$ and $\mathbf{A}$ have any intersecting components, these are the components of $\mathbf{A}$ from the intersection that are perpendicular to $\mathbf{B}$ with respect to the bivector dot product. ie: This is the rejective term.

And finally,

\begin{aligned}\left(\mathbf{A} \wedge \frac{1}{\mathbf{B}}\right) \cdot \mathbf{B}\end{aligned} \quad\quad\quad(19)

This is the remainder, the non-projective and non-coplanar terms. More than three dimensions are required to generate such a term. Example:

\begin{aligned}\mathbf{A} &= \mathbf{e}_{12} + \mathbf{e}_{23} + \mathbf{e}_{43} \\ \mathbf{B} &= \mathbf{e}_{34} \\ \end{aligned}

Product terms for these are:

\begin{aligned}\mathbf{A} \cdot \mathbf{B} &= 1 \\ {\left\langle{{\mathbf{A} \mathbf{B}}}\right\rangle}_{2} &= \mathbf{e}_{24} \\ \mathbf{A} \wedge \mathbf{B} &= \mathbf{e}_{1234} \\ \end{aligned}

The decomposition is thus:

\begin{aligned}\mathbf{A} = \left(\mathbf{A} \cdot \mathbf{B} + {\left\langle{{\mathbf{A} \mathbf{B}}}\right\rangle}_{2} + \mathbf{A} \wedge \mathbf{B}\right) \frac{1}{\mathbf{B}} = (1 + \mathbf{e}_{24} + \mathbf{e}_{1234}) \mathbf{e}_{43}\end{aligned}
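As a cross-check, this example can be verified numerically with a minimal Euclidean geometric algebra sketch, representing basis blades as bitmasks with the usual sign-from-swaps product rule. The helper names (`reorder_sign`, `gp`, `grade`) are my own, not from any particular GA library:

```python
from itertools import product

def reorder_sign(a, b):
    """Sign from counting basis-vector swaps needed to bring the
    product of blades a, b (bitmasks) into canonical order."""
    a >>= 1
    total = 0
    while a:
        total += bin(a & b).count("1")
        a >>= 1
    return -1 if total % 2 else 1

def gp(A, B):
    """Geometric product of multivectors stored as {blade: coeff},
    Euclidean metric (basis vectors square to +1)."""
    out = {}
    for (a, x), (b, y) in product(A.items(), B.items()):
        k = a ^ b
        out[k] = out.get(k, 0) + reorder_sign(a, b) * x * y
    return {k: v for k, v in out.items() if v != 0}

def grade(A, r):
    return {k: v for k, v in A.items() if bin(k).count("1") == r}

# basis bivectors of R^4 as bitmasks: e12, e23, e34 (with e43 = -e34)
e12, e23, e34 = 0b0011, 0b0110, 0b1100

A = {e12: 1, e23: 1, e34: -1}   # A = e12 + e23 + e43
B = {e34: 1}                    # B = e34
AB = gp(A, B)

assert grade(AB, 0) == {0: 1}        # A . B = 1
assert grade(AB, 2) == {0b1010: 1}   # <A B>_2 = e24
assert grade(AB, 4) == {0b1111: 1}   # A ^ B = e1234

# B^2 = -1, so 1/B = -B = e43, and (A B)(1/B) must recover A
inv_B = {e34: -1}
assert gp(AB, inv_B) == A
print("decomposition verified")
```

Running this confirms the three product terms above and that multiplying them by $\frac{1}{\mathbf{B}} = \mathbf{e}_{43}$ reassembles $\mathbf{A}$.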

## Closer look at the grade two term

The grade two term of equation 18 can be expanded using its antisymmetric bivector product representation

\begin{aligned}{\left\langle{{\mathbf{A}\frac{1}{\mathbf{B}}}}\right\rangle}_{2} \mathbf{B}&= \frac{1}{{2}}\left(\mathbf{A}\frac{1}{\mathbf{B}} - \frac{1}{\mathbf{B}}\mathbf{A}\right) \mathbf{B} \\ &= \frac{1}{{2}}\left(\mathbf{A} - \frac{1}{\mathbf{B}}\mathbf{A} \mathbf{B}\right) \\ &= \frac{1}{{2}}\left(\mathbf{A} - \frac{1}{{\hat{\mathbf{B}}}}\mathbf{A} \hat{\mathbf{B}}\right) \\ \end{aligned}

Observe here one can restrict the examination to the case where $\mathbf{B}$ is a unit bivector without loss of generality.

\begin{aligned}{\left\langle{{\mathbf{A}\frac{1}{\mathbf{i}}}}\right\rangle}_{2} \mathbf{i}&= \frac{1}{{2}}\left(\mathbf{A} + \mathbf{i}\mathbf{A}\mathbf{i}\right) \\ &= \frac{1}{{2}}\left(\mathbf{A} - \mathbf{i}^\dagger\mathbf{A}\mathbf{i}\right) \\ \end{aligned}

The second term is a rotation in the plane $\mathbf{i}$, by 180 degrees:

\begin{aligned}\mathbf{i}^\dagger\mathbf{A}\mathbf{i} = e^{-\mathbf{i} \pi/2}\mathbf{A} e^{\mathbf{i} \pi/2}\end{aligned}

So, any components of $\mathbf{A}$ that are completely in the plane cancel out (ie: the $\mathbf{A} \cdot \frac{1}{\mathbf{i}}\mathbf{i}$ component).

Also, if ${\left\langle{{\mathbf{A} \mathbf{i}}}\right\rangle}_{4} \ne 0$, then the components of $\mathbf{A}$ that contribute to that grade four term commute with $\mathbf{i}$, so

\begin{aligned}{\left\langle{{\mathbf{A} - \mathbf{i}^\dagger\mathbf{A}\mathbf{i}}}\right\rangle}_{4}&= {\left\langle{\mathbf{A}}\right\rangle}_{4} - {\left\langle{{\mathbf{i}^\dagger\mathbf{A}\mathbf{i}}}\right\rangle}_{4} \\ &= {\left\langle{\mathbf{A}}\right\rangle}_{4} - {\left\langle{{\mathbf{i}^\dagger\mathbf{i}\mathbf{A}}}\right\rangle}_{4} \\ &= {\left\langle{\mathbf{A}}\right\rangle}_{4} - {\left\langle{\mathbf{A}}\right\rangle}_{4} \\ &= 0 \\ \end{aligned}

This implies that we have only grade two terms, and the final grade selection in equation 18 can be dropped:

\begin{aligned} {\left\langle{{{\left\langle{{\mathbf{A}\frac{1}{\mathbf{B}}}}\right\rangle}_{2} \mathbf{B}}}\right\rangle}_{2} = {\left\langle{{\mathbf{A}\frac{1}{\mathbf{B}}}}\right\rangle}_{2} \mathbf{B}\end{aligned} \quad\quad\quad(20)

It’s also possible to write this in a few alternate variations which are useful to list explicitly so that one can recognize them in other contexts:

\begin{aligned}{\left\langle{{\mathbf{A}\frac{1}{\mathbf{B}}}}\right\rangle}_{2} \mathbf{B}&= \frac{1}{{2}}\left(\mathbf{A} - \frac{1}{\mathbf{B}}\mathbf{A}\mathbf{B}\right) \\ &= \frac{1}{{2}}\left(\mathbf{A} + \hat{\mathbf{B}}\mathbf{A}\hat{\mathbf{B}}\right) \\ &= \frac{1}{{2}}\left( \hat{\mathbf{B}}\mathbf{A} -\mathbf{A}\hat{\mathbf{B}} \right)\hat{\mathbf{B}} \\ &= {\left\langle{{\hat{\mathbf{B}}\mathbf{A}}}\right\rangle}_{2}\hat{\mathbf{B}} \\ &= \hat{\mathbf{B}}{\left\langle{{\mathbf{A}\hat{\mathbf{B}}}}\right\rangle}_{2} \\ \end{aligned}

## Projection and Rejection

Equation 20 can be substituted back into equation 16 yielding:

\begin{aligned}\mathbf{A} =\mathbf{A} \cdot \frac{1}{\mathbf{B}} \mathbf{B} \\ +{\left\langle{{\mathbf{A}\frac{1}{\mathbf{B}}}}\right\rangle}_{2} \mathbf{B}+\left(\mathbf{A} \wedge \frac{1}{\mathbf{B}}\right) \cdot \mathbf{B} \end{aligned} \quad\quad\quad(21)

Now, for the special case where $\mathbf{A} \wedge \mathbf{B} = 0$ (all bivector components of the grade two multivector $\mathbf{A}$ have a common vector with bivector $\mathbf{B}$) we can write

\begin{aligned}\mathbf{A} &= \mathbf{A} \cdot \frac{1}{\mathbf{B}} \mathbf{B} +{\left\langle{{\mathbf{A}\frac{1}{\mathbf{B}}}}\right\rangle}_{2} \mathbf{B} \\ &= \mathbf{B} \frac{1}{\mathbf{B}} \cdot {\mathbf{A}} + \mathbf{B} {\left\langle{{\frac{1}{\mathbf{B}}\mathbf{A}}}\right\rangle}_{2} \\ \end{aligned}

This is

\begin{aligned}\mathbf{A} = \text{Proj}_{\mathbf{B}}(\mathbf{A}) + \text{Rej}_{\mathbf{B}}(\mathbf{A}) \end{aligned} \quad\quad\quad(22)

It’s worth verifying that these two terms are orthogonal (with respect to the grade two vector dot product)

\begin{aligned}\text{Proj}_{\mathbf{B}}(\mathbf{A}) \cdot \text{Rej}_{\mathbf{B}}(\mathbf{A})&= \left\langle{{ \text{Proj}_{\mathbf{B}}(\mathbf{A}) \text{Rej}_{\mathbf{B}}(\mathbf{A}) }}\right\rangle \\ &= \left\langle{{ \mathbf{A} \cdot \frac{1}{\mathbf{B}} \mathbf{B} \mathbf{B} {\left\langle{{\frac{1}{\mathbf{B}}\mathbf{A}}}\right\rangle}_{2} }}\right\rangle \\ &= \frac{1}{{4\mathbf{B}^2}}\left\langle{{ (\mathbf{A}\mathbf{B} + \mathbf{B}\mathbf{A})(\mathbf{B}\mathbf{A} - \mathbf{A}\mathbf{B}) }}\right\rangle \\ &= \frac{1}{{4\mathbf{B}^2}}\left\langle{{ \mathbf{A}\mathbf{B}\mathbf{B}\mathbf{A} -\mathbf{A}\mathbf{B}\mathbf{A}\mathbf{B} +\mathbf{B}\mathbf{A}\mathbf{B}\mathbf{A} -\mathbf{B}\mathbf{A}\mathbf{A}\mathbf{B} }}\right\rangle \\ &= \frac{1}{{4\mathbf{B}^2}}\left\langle{{ -\mathbf{A}\mathbf{B}\mathbf{A}\mathbf{B} +\mathbf{B}\mathbf{A}\mathbf{B}\mathbf{A} }}\right\rangle \\ \end{aligned}

Since we have introduced the restriction $\mathbf{A} \wedge \mathbf{B} = 0$, we can use the dot product to reorder product terms:

\begin{aligned}\mathbf{A}\mathbf{B} = -\mathbf{B}\mathbf{A} + 2 \mathbf{A} \cdot \mathbf{B}\end{aligned}

This can be used to reduce the grade zero term above:

\begin{aligned}\left\langle{{ \mathbf{B}\mathbf{A}\mathbf{B}\mathbf{A} -\mathbf{A}\mathbf{B}\mathbf{A}\mathbf{B} }}\right\rangle&= \left\langle{{ \mathbf{B}\mathbf{A}(-\mathbf{A}\mathbf{B} + 2 \mathbf{A} \cdot \mathbf{B}) -(-\mathbf{B}\mathbf{A} + 2 \mathbf{A} \cdot \mathbf{B})\mathbf{A}\mathbf{B} }}\right\rangle \\ &= + 2 (\mathbf{A} \cdot \mathbf{B})\left\langle{{\mathbf{B}\mathbf{A} - \mathbf{A}\mathbf{B} }}\right\rangle \\ &= + 4 (\mathbf{A} \cdot \mathbf{B})\left\langle{{{\left\langle{{\mathbf{B}\mathbf{A}}}\right\rangle}_{2}}}\right\rangle \\ &= 0 \\ \end{aligned}

This proves orthogonality as expected.
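In $\mathbb{R}^3$ this statement dualizes to ordinary vector projection and rejection of the plane normals: with $\mathbf{A} = i\mathbf{n}_A$ and $\mathbf{B} = i\mathbf{n}_B$, the projection term reduces to $i$ times the vector projection of $\mathbf{n}_A$ onto $\mathbf{n}_B$, and the rejection term to $i$ times the vector rejection. That dual picture gives a quick numerical sanity check (a numpy sketch, not the general $\mathbb{R}^N$ computation):

```python
import numpy as np

rng = np.random.default_rng(0)
n_a = rng.normal(size=3)   # normal (dual) of bivector A = i n_a
n_b = rng.normal(size=3)   # normal (dual) of bivector B = i n_b

# vector projection / rejection of n_a onto n_b,
# the duals of Proj_B(A) and Rej_B(A) in R^3
proj = (n_a @ n_b) / (n_b @ n_b) * n_b
rej = n_a - proj

assert np.allclose(proj + rej, n_a)   # A = Proj + Rej
assert abs(proj @ rej) < 1e-10        # orthogonality
print("projection/rejection orthogonality verified")
```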

## Grade two term as a generator of rotations.

\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.4\textheight]{planerejection}
\caption{Bivector rejection. Perpendicular component of plane.}
\end{figure}

Figure \ref{fig:planerejection} illustrates how the grade 2 component of the bivector product acts as a rotation in the rejection operation.

Provided that $\mathbf{A}$ and $\mathbf{B}$ are not coplanar, ${\left\langle{{\mathbf{A}\mathbf{B}}}\right\rangle}_{2}$ is a plane mutually perpendicular to both.

Given two mutually perpendicular unit bivectors ${\mathbf{A}}$ and ${\mathbf{B}}$, we can in fact write:

\begin{aligned}{\mathbf{B}} = {\mathbf{A}}{\left\langle{{\mathbf{B}{\mathbf{A}}}}\right\rangle}_{2}\end{aligned}

\begin{aligned}{\mathbf{B}} = {\left\langle{{\mathbf{A}{\mathbf{B}}}}\right\rangle}_{2}{\mathbf{A}}\end{aligned}

Compare this to a unit bivector for two mutually perpendicular vectors:

\begin{aligned}\mathbf{b} = \mathbf{a} (\mathbf{a} \wedge \mathbf{b})\end{aligned}

\begin{aligned}\mathbf{b} = (\mathbf{b} \wedge \mathbf{a}) \mathbf{a}\end{aligned}

In both cases, the unit bivector functions as an imaginary number, applying a rotation of $\pi/2$ rotating one of the perpendicular entities onto the other.

As with vectors one can split the rotation of the unit bivector into half angle left and right rotations. For example, for the same mutually perpendicular pair of bivectors one can write

\begin{aligned}\mathbf{B} &= \mathbf{A}{\left\langle{{\mathbf{B} \mathbf{A}}}\right\rangle}_{2} \\ &= \mathbf{A} e^{{\left\langle{{\mathbf{B} \mathbf{A}}}\right\rangle}_{2}\pi/2} \\ &= e^{-{\left\langle{{\mathbf{B} \mathbf{A}}}\right\rangle}_{2}\pi/4} \mathbf{A} e^{{\left\langle{{\mathbf{B} \mathbf{A}}}\right\rangle}_{2}\pi/4} \\ &= \left(\frac{1}{{\sqrt{2}}}(1 - \mathbf{B} \mathbf{A})\right) \mathbf{A} \left(\frac{1}{{\sqrt{2}}}(1 + \mathbf{B} \mathbf{A}) \right) \\ \end{aligned}

Direct multiplication can be used to verify that this does in fact produce the desired result.
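That direct multiplication can be sketched numerically for a concrete pair, say $\mathbf{A} = \mathbf{e}_{12}$, $\mathbf{B} = \mathbf{e}_{13}$ in $\mathbb{R}^3$ (mutually perpendicular unit bivectors, with ${\left\langle{{\mathbf{B}\mathbf{A}}}\right\rangle}_{2} = \mathbf{e}_{23}$). The bitmask blade product helpers here are my own construction:

```python
from itertools import product

def reorder_sign(a, b):
    # sign from counting swaps to merge basis blades a, b (bitmasks)
    a >>= 1
    total = 0
    while a:
        total += bin(a & b).count("1")
        a >>= 1
    return -1 if total % 2 else 1

def gp(A, B):
    # geometric product of multivectors {blade: coeff}, Euclidean metric
    out = {}
    for (a, x), (b, y) in product(A.items(), B.items()):
        k = a ^ b
        out[k] = out.get(k, 0.0) + reorder_sign(a, b) * x * y
    return {k: v for k, v in out.items() if abs(v) > 1e-12}

e12, e13, e23 = 0b011, 0b101, 0b110
A = {e12: 1.0}
B = {e13: 1.0}

BA = gp(B, A)
assert BA == {e23: 1.0}      # B A is already pure grade two: e23

s = 2 ** -0.5
R  = {0: s, e23:  s}         # R = (1 + B A)/sqrt(2) = exp(e23 pi/4)
Rd = {0: s, e23: -s}         # its reverse

rotated = gp(gp(Rd, A), R)   # R^dagger A R
assert set(rotated) == {e13} and abs(rotated[e13] - 1.0) < 1e-12
print("half angle rotor check passed")
```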

In general, writing

\begin{aligned}\mathbf{i} = \frac{{\left\langle{{\mathbf{B} \mathbf{A}}}\right\rangle}_{2}}{{\left\lvert{{\left\langle{{\mathbf{B} \mathbf{A}}}\right\rangle}_{2}}\right\rvert}}\end{aligned}

the rotation of plane $\mathbf{A}$ towards $\mathbf{B}$ by angle $\theta$ can be expressed with either a single sided full angle rotation

\begin{aligned}\text{Rot}_{\theta: \mathbf{A} \rightarrow \mathbf{B}}(\mathbf{A}) &= \mathbf{A} e^{\mathbf{i} \theta} \\ &= e^{-\mathbf{i} \theta} \mathbf{A} \\ \end{aligned}

or with the double sided half angle rotor formula:

\begin{aligned}\text{Rot}_{\theta: \mathbf{A} \rightarrow \mathbf{B}}(\mathbf{A}) = e^{-\mathbf{i} \theta/2} \mathbf{A} e^{\mathbf{i} \theta/2} = \mathbf{R}^\dagger \mathbf{A} \mathbf{R}\end{aligned} \quad\quad\quad(23)

Where:

\begin{aligned}\mathbf{R} &= e^{\mathbf{i}\theta/2} \\ &= \cos(\theta/2) + \frac{{\left\langle{{\mathbf{B} \mathbf{A}}}\right\rangle}_{2}}{{\left\lvert{{\left\langle{{\mathbf{B} \mathbf{A}}}\right\rangle}_{2}}\right\rvert}}\sin(\theta/2) \\ \end{aligned}

As with half angle rotors applied to vectors, there are two possible rotation orientations. Here the orientation is such that the angle is measured along the minimal arc between the two planes, with the angle between them in the range $(0,\pi)$, as opposed to the $(\pi,2\pi)$ rotational direction.

## Angle between two intersecting planes.

It is worth pointing out, for comparison with the vector result, that one can use the bivector dot product to calculate the angle between two intersecting planes. This angle of separation $\theta$ between the two can be expressed using the exponential:

\begin{aligned}\hat{\mathbf{B}} = \hat{\mathbf{A}} e^{ \frac{{\left\langle{{\mathbf{B} \mathbf{A}}}\right\rangle}_{2}}{{\left\lvert{{\left\langle{{\mathbf{B} \mathbf{A}}}\right\rangle}_{2}}\right\rvert}} \theta}\end{aligned}

\begin{aligned}\implies-\hat{\mathbf{A}} \hat{\mathbf{B}} = e^{ \frac{{\left\langle{{\mathbf{B} \mathbf{A}}}\right\rangle}_{2}}{{\left\lvert{{\left\langle{{\mathbf{B} \mathbf{A}}}\right\rangle}_{2}}\right\rvert}} \theta}\end{aligned}

Taking the grade zero terms of both sides we have:

\begin{aligned}-\left\langle{{\hat{\mathbf{A}} \hat{\mathbf{B}}}}\right\rangle = \left\langle{{ e^{ \frac{{\left\langle{{\mathbf{B} \mathbf{A}}}\right\rangle}_{2}}{{\left\lvert{{\left\langle{{\mathbf{B} \mathbf{A}}}\right\rangle}_{2}}\right\rvert}} \theta} }}\right\rangle\end{aligned}

\begin{aligned}\implies\cos(\theta) = - \frac{\mathbf{A} \cdot \mathbf{B}}{{\left\lvert{\mathbf{A}}\right\rvert}{\left\lvert{\mathbf{B}}\right\rvert}}\end{aligned}

The sine can be obtained by selecting the grade two terms

\begin{aligned}-{\left\langle{{\hat{\mathbf{A}} \hat{\mathbf{B}}}}\right\rangle}_{2} = \frac{{\left\langle{{\mathbf{B} \mathbf{A}}}\right\rangle}_{2}}{{\left\lvert{{\left\langle{{\mathbf{B} \mathbf{A}}}\right\rangle}_{2}}\right\rvert}} \sin(\theta)\end{aligned}

\begin{aligned}\implies\sin(\theta) = \frac{{\left\lvert{{\left\langle{{\mathbf{B} \mathbf{A}}}\right\rangle}_{2}}\right\rvert}}{ {\left\lvert{\mathbf{A}}\right\rvert}{\left\lvert{\mathbf{B}}\right\rvert} }\end{aligned}

Note that the strictly positive sine result here is consistent with the fact that the angle is being measured such that it is in the
$(0,\pi)$ range.
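In $\mathbb{R}^3$ this can be spot-checked with cross products, since by equation 25 the bivector dot product is the negative of the dot product of the normals, and the grade two term dualizes to the cross product of the normals. A numpy sketch, using a plane tilted by a known angle:

```python
import numpy as np

theta = 0.7  # chosen separation angle between the two planes

# plane A spanned by e1, e2; plane B spanned by e1 and a vector tilted by theta
a, b = np.array([1.0, 0, 0]), np.array([0, 1.0, 0])
c, d = a, np.array([0, np.cos(theta), np.sin(theta)])

n_A, n_B = np.cross(a, b), np.cross(c, d)

# equation 25: (a^b).(c^d) = -(a x b).(c x d)
A_dot_B = -n_A @ n_B
norm_A, norm_B = np.linalg.norm(n_A), np.linalg.norm(n_B)

cos_theta = -A_dot_B / (norm_A * norm_B)
# |<B A>_2| dualizes to |n_A x n_B| in R^3
sin_theta = np.linalg.norm(np.cross(n_A, n_B)) / (norm_A * norm_B)

assert np.isclose(np.arctan2(sin_theta, cos_theta), theta)
print("plane angle check passed")
```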

## Rotation of an arbitrarily oriented plane.

As stated in a few of the GA books, the rotor equation is a rotation representation that works for blades of any grade. Let’s verify this for the bivector case. Given a plane through the origin spanned by two direction vectors, rotated about the origin in a plane specified by a unit magnitude rotor $\mathbf{R}$, the rotated plane will be specified by the wedge of the rotations applied to the two direction vectors. Let

\begin{aligned}\mathbf{A} = \mathbf{u} \wedge \mathbf{v}\end{aligned}

Then,

\begin{aligned}R(\mathbf{A}) &= R(\mathbf{u}) \wedge R(\mathbf{v}) \\ &= (\mathbf{R}^\dagger \mathbf{u} \mathbf{R}) \wedge (\mathbf{R}^\dagger \mathbf{v} \mathbf{R}) \\ &= \frac{1}{{2}}( \mathbf{R}^\dagger \mathbf{u} \mathbf{R} \mathbf{R}^\dagger \mathbf{v} \mathbf{R} - \mathbf{R}^\dagger \mathbf{v} \mathbf{R} \mathbf{R}^\dagger \mathbf{u} \mathbf{R}) \\ &= \frac{1}{{2}}( \mathbf{R}^\dagger \mathbf{u} \mathbf{v} \mathbf{R} - \mathbf{R}^\dagger \mathbf{v} \mathbf{u} \mathbf{R}) \\ &= \mathbf{R}^\dagger \frac{\mathbf{u} \mathbf{v} - \mathbf{v} \mathbf{u}}{2} \mathbf{R} \\ &= \mathbf{R}^\dagger \mathbf{u} \wedge \mathbf{v} \mathbf{R} \\ &= \mathbf{R}^\dagger \mathbf{A} \mathbf{R} \\ \end{aligned}

Observe that with this half angle double sided rotation equation, any component of $\mathbf{A}$ in the plane of rotation, or any component that does not intersect the plane of rotation, will be unchanged by the rotor since it will commute with it. In those cases the opposing sign half angle rotations will cancel out. Only the components of the plane that are perpendicular to the rotational plane will be changed by this rotation operation.
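In $\mathbb{R}^3$ this result dualizes to the familiar fact that a rotation matrix $M$ satisfies $(M\mathbf{u}) \times (M\mathbf{v}) = M(\mathbf{u} \times \mathbf{v})$: the plane's normal rotates with the plane. A quick numpy check, building the rotation with the Rodrigues formula (my construction, not from the text):

```python
import numpy as np

def rodrigues(axis, theta):
    """Rotation matrix for angle theta about the given axis (Rodrigues formula)."""
    k = axis / np.linalg.norm(axis)
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])  # cross-product matrix: K x = k cross x
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

rng = np.random.default_rng(1)
u, v = rng.normal(size=3), rng.normal(size=3)
M = rodrigues(rng.normal(size=3), 0.9)

# rotated plane normal equals the rotation of the original normal
assert np.allclose(np.cross(M @ u, M @ v), M @ np.cross(u, v))
print("bivector rotation check passed")
```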

# A couple of reduction formula equivalents from $\mathbb{R}^{3}$ vector geometry.

The reduction of the $\mathbb{R}^{3}$ dot of cross products to dot products can be naturally derived using GA arguments. Writing $i$ as the $\mathbb{R}^{3}$ pseudoscalar we have:

\begin{aligned}( \mathbf{a} \times \mathbf{b} ) \cdot ( \mathbf{c} \times \mathbf{d} )&= \frac{\mathbf{a} \wedge \mathbf{b}}{i} \cdot \frac{\mathbf{c} \wedge \mathbf{d}}{i} \\ &= \frac{1}{{2}}\left( \frac{\mathbf{a} \wedge \mathbf{b}}{i} \frac{\mathbf{c} \wedge \mathbf{d}}{i} + \frac{\mathbf{c} \wedge \mathbf{d}}{i} \frac{\mathbf{a} \wedge \mathbf{b}}{i} \right) \\ &= -\frac{1}{{2}}\left( (\mathbf{a} \wedge \mathbf{b}) (\mathbf{c} \wedge \mathbf{d}) + (\mathbf{c} \wedge \mathbf{d}) (\mathbf{a} \wedge \mathbf{b}) \right) \\ &= - (\mathbf{a} \wedge \mathbf{b}) \cdot (\mathbf{c} \wedge \mathbf{d}) - (\mathbf{a} \wedge \mathbf{b}) \wedge (\mathbf{c} \wedge \mathbf{d})\end{aligned}

In $\mathbb{R}^{3}$ this last term must be zero, thus one can write

\begin{aligned}( \mathbf{a} \times \mathbf{b} ) \cdot ( \mathbf{c} \times \mathbf{d} ) = -(\mathbf{a} \wedge \mathbf{b}) \cdot (\mathbf{c} \wedge \mathbf{d})\end{aligned} \quad\quad\quad(24)

This is now in a form where it can be reduced to products of vector dot products.

\begin{aligned}(\mathbf{a} \wedge \mathbf{b}) \cdot (\mathbf{c} \wedge \mathbf{d})&= \frac{1}{{2}}\left\langle{{ (\mathbf{a} \wedge \mathbf{b}) (\mathbf{c} \wedge \mathbf{d}) + (\mathbf{c} \wedge \mathbf{d}) (\mathbf{a} \wedge \mathbf{b}) }}\right\rangle \\ &= \frac{1}{{2}}\left\langle{{ (\mathbf{a} \wedge \mathbf{b}) (\mathbf{c} \wedge \mathbf{d}) + (\mathbf{d} \wedge \mathbf{c}) (\mathbf{b} \wedge \mathbf{a}) }}\right\rangle \\ &= \frac{1}{{2}}\left\langle{{ (\mathbf{a}\mathbf{b} - \mathbf{a} \cdot \mathbf{b} ) (\mathbf{c} \wedge \mathbf{d}) + (\mathbf{d} \wedge \mathbf{c}) (\mathbf{b} \mathbf{a} - \mathbf{b} \cdot \mathbf{a} ) }}\right\rangle \\ &= \frac{1}{{2}}\left\langle{{ \mathbf{a}\mathbf{b} (\mathbf{c} \wedge \mathbf{d}) + (\mathbf{d} \wedge \mathbf{c}) \mathbf{b} \mathbf{a} }}\right\rangle \\ &= \frac{1}{{2}}\left\langle{{ \mathbf{a} (\mathbf{b} \cdot (\mathbf{c} \wedge \mathbf{d}) + \mathbf{b} \wedge (\mathbf{c} \wedge \mathbf{d})) + ( (\mathbf{d} \wedge \mathbf{c}) \cdot \mathbf{b} + (\mathbf{d} \wedge \mathbf{c}) \wedge \mathbf{b}) \mathbf{a} }}\right\rangle \\ &= \frac{1}{{2}}\left\langle{{ \mathbf{a} (\mathbf{b} \cdot (\mathbf{c} \wedge \mathbf{d})) + ( (\mathbf{d} \wedge \mathbf{c}) \cdot \mathbf{b} ) \mathbf{a} }}\right\rangle \\ &= \frac{1}{{2}}\left\langle{{ \mathbf{a} ( (\mathbf{b} \cdot \mathbf{c}) \mathbf{d} - (\mathbf{b} \cdot \mathbf{d}) \mathbf{c} ) + ( \mathbf{d} (\mathbf{c} \cdot \mathbf{b}) - \mathbf{c} (\mathbf{d} \cdot \mathbf{b}) ) \mathbf{a} }}\right\rangle \\ &= \frac{1}{{2}}( ( \mathbf{a} \cdot \mathbf{d} ) ( \mathbf{b} \cdot \mathbf{c} ) - ( \mathbf{b} \cdot \mathbf{d} ) ( \mathbf{a} \cdot \mathbf{c} ) + ( \mathbf{d} \cdot \mathbf{a} ) ( \mathbf{c} \cdot \mathbf{b} ) - ( \mathbf{c} \cdot \mathbf{a} ) ( \mathbf{d} \cdot \mathbf{b} ) ) \\ &= ( \mathbf{a} \cdot \mathbf{d} ) ( \mathbf{b} \cdot \mathbf{c} ) - ( \mathbf{a} \cdot \mathbf{c} ) ( \mathbf{b} \cdot \mathbf{d} ) \\ \end{aligned}

Summarizing with a comparison to the $\mathbb{R}^{3}$ relations we have:

\begin{aligned}(\mathbf{a} \wedge \mathbf{b}) \cdot (\mathbf{c} \wedge \mathbf{d}) = -(\mathbf{a} \times \mathbf{b}) \cdot (\mathbf{c} \times \mathbf{d}) = ( \mathbf{a} \cdot \mathbf{d} ) ( \mathbf{b} \cdot \mathbf{c} ) - ( \mathbf{a} \cdot \mathbf{c} ) ( \mathbf{b} \cdot \mathbf{d} )\end{aligned} \quad\quad\quad(25)

\begin{aligned}(\mathbf{a} \wedge \mathbf{c}) \cdot (\mathbf{b} \wedge \mathbf{c}) = -(\mathbf{a} \times \mathbf{c}) \cdot (\mathbf{b} \times \mathbf{c}) = ( \mathbf{a} \cdot \mathbf{c} ) ( \mathbf{b} \cdot \mathbf{c} ) - \mathbf{c}^2 ( \mathbf{a} \cdot \mathbf{b} )\end{aligned} \quad\quad\quad(26)

The bivector relations hold for all of $\mathbb{R}^{N}$.
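Both identities are easy to spot-check numerically in $\mathbb{R}^3$ with random vectors (a numpy sketch):

```python
import numpy as np

rng = np.random.default_rng(2)
a, b, c, d = rng.normal(size=(4, 3))

# equation 25: -(a x b).(c x d) = (a.d)(b.c) - (a.c)(b.d)
lhs25 = -np.cross(a, b) @ np.cross(c, d)
rhs25 = (a @ d) * (b @ c) - (a @ c) * (b @ d)
assert np.isclose(lhs25, rhs25)

# equation 26: -(a x c).(b x c) = (a.c)(b.c) - c^2 (a.b)
lhs26 = -np.cross(a, c) @ np.cross(b, c)
rhs26 = (a @ c) * (b @ c) - (c @ c) * (a @ b)
assert np.isclose(lhs26, rhs26)

print("reduction identities verified")
```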