# Peeter Joot's (OLD) Blog.


# Archive for October, 2010

## New version of my Geometric Algebra notes compilation posted.

Posted by peeterjoot on October 31, 2010

New versions of my Geometric Algebra Notes and my miscellaneous non-Geometric Algebra physics notes are now posted.

Changes since the last posting likely include the incorporation of the following individual notes:

Oct 30, 2010 Multivector commutators and Lorentz boosts.
Use of the commutator and anticommutator to find the components of a multivector that are affected by a Lorentz boost. Utilize this to boost the electrodynamic field bivector, and show how a small velocity introduced perpendicular to an electrostatic field results in a specific magnetic field, i.e. the magnetic field seen by the electron as it orbits a proton.

Oct 23, 2010 PHY356 Problem Set II.
A couple more problems from my QM1 course.

Oct 22, 2010 Classical Electrodynamic gauge interaction.
Momentum and Energy transformation to derive Lorentz force law from a free particle Hamiltonian.

Oct 20, 2010 Derivation of the spherical polar Laplacian
A derivation of the spherical polar Laplacian.

Oct 10, 2010 Notes and problems for Desai chapter IV.
Chapter IV Notes and problems for Desai’s “Quantum Mechanics and Introductory Field Theory” text.

Oct 7, 2010 PHY356 Problem Set I.
A couple problems from my QM1 course.

Oct 1, 2010 Notes and problems for Desai chapter III.
Chapter III Notes and problems for Desai’s “Quantum Mechanics and Introductory Field Theory” text.

Sept 27, 2010 Unitary exponential sandwich
Unitary transformation using anticommutators.

Sept 19, 2010 Desai Chapter II notes and problems.
Chapter II Notes and problems for Desai’s “Quantum Mechanics and Introductory Field Theory” text.

July 27, 2010 Rotations using matrix exponentials
Calculating the exponential form for a unitary operator. A unitary operator can be expressed as the exponential of a Hermitian operator. Show how this can be calculated for the matrix representation of an operator. Explicitly calculating this matrix for a plane rotation yields one of the Pauli spin matrices. While a Lorentz boost is not unitary, the same procedure can be used to calculate the rotation-like angle for a boost, and we also find that the result can be expressed in terms of one of the Pauli spin matrices.

July 23, 2010 Dirac Notation Ponderings.
Chapter 1 solutions and some associated notes.

June 25, 2010 More problems from Liboff chapter 4
Liboff problems 4.11, 4.12, 4.14

June 19, 2010 Hoop and spring oscillator problem.
A linear approximation to a hoop and spring problem.

May 31, 2010 Infinite square well wavefunction.
A QM problem from Liboff chapter 4.

May 30, 2010 On commutation of exponentials
Show that commutation of exponentials occurs if exponentiated terms also commute.

May 29, 2010 Fourier transformation of the Pauli QED wave equation (Take I).
Unsuccessful attempt to find a solution to the Pauli QM Hamiltonian using Fourier transforms. Also try to figure out the notation from the Feynman book where I saw this.

May 28, 2010 Errata for Feynman’s Quantum Electrodynamics (Addison-Wesley)?
My collection of errata notes for some Feynman lecture notes on QED compiled by a student.

May 23, 2010 Effect of sinusoid operators
Liboff, problem 3.19.

May 23, 2010 Time evolution of some wave functions
Liboff, problem 3.14.

May 15, 2010 Center of mass of a toroidal segment.
Calculate the volume element for a toroidal segment, and then the center of mass. This is a nice application of bivector rotation exponentials.

Mar 7, 2010 Newton’s method for intersection of curves in a plane.
Refresh my memory on Newton’s method. Then take the same idea and apply it to finding the intersection of two arbitrary curves in a plane. This provides a nice example for the use of the wedge product in linear system solutions. Curiously, the more general result for the iteration of an intersection estimate is tidier and prettier than that of a curve with a line.

Mar 3, 2010 Notes on Goldstein’s Routh’s procedure.
Puzzle through Routh’s procedure as outlined in Goldstein.

Feb 19, 2010 1D forced harmonic oscillator. Quick solution of non-homogeneous problem.
Solve the one dimensional harmonic oscillator problem using matrix methods.

Jan 1, 2010 Integrating the equation of motion for a one dimensional problem.
Solve for time for an arbitrary one dimensional potential.

Dec 21, 2009 Energy and momentum for assumed Fourier transform solutions to the homogeneous Maxwell equation.
Fourier transform instead of series treatment of the previous, determining the Hamiltonian like energy expression for a wave packet.

Dec 16, 2009 Electrodynamic field energy for vacuum.
Apply the previous complex energy momentum tensor results to the calculation that Bohm does in his QM book for vacuum energy of a periodic electromagnetic field. I’d tried to do this a couple times using complex exponentials and never really gotten it right because of attempting to use the pseudoscalar as the imaginary for the phasors, instead of introducing a completely separate commuting imaginary. The end result is an energy expression for the volume element that has the structure of a mechanical Hamiltonian.

Dec 13, 2009 Energy and momentum for Complex electric and magnetic field phasors.
Work out the conservation equations for the energy and Poynting vectors in a complex representation. This fills in some gaps in Jackson, but tackles the problem from a GA starting point.

Dec 6, 2009 Jacobians and spherical polar gradient.

Dec 1, 2009 Polar form for the gradient and Laplacian.
Explore a chain rule derivation of the polar form of the Laplacian, and the validity of my old first-year professor’s statement that the divergence of the gradient is the only way to express the general Laplacian. His insistence that grad dot grad is not generally valid is reconciled here with reality; the key is that the action on the unit vectors also has to be considered.

Nov 15, 2009 Force free relativistic motion.

Nov 11, 2009 question on elliptic function paper.

Nov 4, 2009 Spherical polar pendulum for one and multiple masses (Take II)
The constraints required to derive the equations of motion from a bivector parameterized Lagrangian for the multiple spherical pendulum make the problem considerably more complicated than would be the case with a plain scalar parameterization. Take the previous multiple spherical pendulum and rework it with only scalar spherical polar angles. I later rework this once more removing all the geometric algebra, which simplifies it further.

Oct 27, 2009 Spherical polar pendulum for one and multiple masses, and multivector Euler-Lagrange formulation.
Derive the multivector Euler-Lagrange relationships. These were given in Doran/Lasenby but I did not understand them there. Apply these to the multiple spherical pendulum with the Lagrangian expressed in terms of a bivector angle containing all the phi dependence and a scalar polar angle.

Sept 26, 2009 Hamiltonian notes.

Sept 24, 2009 Electromagnetic Gauge invariance.
Show the gauge invariance of the Lorentz force equations. Start with the four vector representation since these transformation relations are simpler there and then show the invariance in the explicit space and time representation.

Sept 22, 2009 Lorentz force from Lagrangian (non-covariant)
Show that the non-covariant Lagrangian from Jackson does produce the Lorentz force law (an exercise for the reader).

Sept 20, 2009 Spherical Polar unit vectors in exponential form.
An exponential representation of spherical polar unit vectors. This was observed when considering the bivector form of the angular momentum operator, and is reiterated here independent of any quantum mechanical context.

Sept 13, 2009 Relativistic classical proton electron interaction.
An attempt to setup (but not yet solve) the equations for relativistically correct proton electron interaction.

Sept 10, 2009 Decoding the Merced Florez article.

Sept 6, 2009 bivectorSelectWrong

Sept 6, 2009 Bivector grades of the squared angular momentum operator.
The squared angular momentum operator can potentially have scalar, bivector, and (four) pseudoscalar components (depending on the dimension of the space). Here just the bivector grades of that product are calculated. With this the complete factorization of the Laplacian can be obtained.

Sept 5, 2009 Maxwell Lagrangian, rotation of coordinates.

Sept 4, 2009 Translation and rotation Noether field currents.
Review Lagrangian field concepts. Derive the field versions of the Euler-Lagrange equations. Calculate the conserved current and conservation law, a divergence, for a Lagrangian with a single parameter symmetry (such as rotation or boost by a scalar angle or rapidity). Next, spacetime symmetries are considered, starting with the question of the symmetry’s existence, then a calculation of the canonical energy momentum tensor and its associated divergence relation. Finally, an attempt is made to use a similar procedure to calculate a conserved current for an incremental spacetime translation. A divergence relation is found, but it is not a conservation relationship, having a nonzero difference of energy momentum tensors.

Aug 31, 2009 Generator of rotations in arbitrary dimensions.
Similar to the exponential translation operator, the exponential operator that generates rotations is derived. Geometric Algebra is used (with an attempt to make this somewhat understandable without a lot of GA background). Explicit coordinate expansion is also covered, as well as a comparison to how the same derivation could be done with matrix-only methods. The results obtained apply to Euclidean and other metrics, and to all dimensions, both 2D and greater than or equal to 3D (unlike the cross product form).

Aug 20, 2009 Introduction to Geometric Algebra.

Aug 16, 2009 Graphical representation of Spherical Harmonics for $l=1$
Observations that the first set of spherical harmonic associated Legendre eigenfunctions have a natural representation as projections from rotated spherical polar rotation points.

Aug 14, 2009 (INCOMPLETE) Geometry of Maxwell radiation solutions
After having some trouble with pseudoscalar phasor representations of the wave equation, step back and examine the geometry that these require. Find that the use of $I\hat{\mathbf{z}}$ for the imaginary means that only transverse solutions can be encoded.

Aug 11, 2009 Dot product of vector and bivector

Aug 11, 2009 Dot product of vector and blade

Aug 10, 2009 Covariant Maxwell equation in media
Formulate the Maxwell equation in media (from Jackson) without an explicit spacetime split.

## Multivector commutators and Lorentz boosts.

Posted by peeterjoot on October 31, 2010

[Click here for a PDF of this post with nicer formatting]

# Motivation.

In some reading I found that the electrodynamic field components transform in a reversed sense to that of vectors: instead of the components perpendicular to the boost direction remaining unaffected, those are the parts that are altered.

To explore this, look at the Lorentz boost action on a multivector, utilizing symmetric and antisymmetric products to split that multivector into portions affected and unaffected by the boost. For the bivector (electrodynamic) case and the four vector case, examine how these map to dot and wedge (or cross) products.

The underlying motivator for this boost consideration is an attempt to see where equation (6.70) of [1] comes from. We get to this by the very end.

# Guts.

## Structure of the bivector boost.

Recall that we can write our Lorentz boost in exponential form with

\begin{aligned}L &= e^{\alpha \boldsymbol{\sigma}/2} \\ X' &= L^\dagger X L,\end{aligned} \hspace{\stretch{1}}(2.1)

where $\boldsymbol{\sigma}$ is a spatial vector. This works for our bivector field too, assuming the composite transformation is an outermorphism of the transformed four vectors. Applying the boost to both the gradient and the potential our transformed field is then

\begin{aligned}F' &= \nabla' \wedge A' \\ &= (L^\dagger \nabla L) \wedge (L^\dagger A L) \\ &= \frac{1}{{2}} \left((L^\dagger \stackrel{ \rightarrow }{\nabla} L) (L^\dagger A L) -(L^\dagger A L) (L^\dagger \stackrel{ \leftarrow }{\nabla} L)\right) \\ &= \frac{1}{{2}} L^\dagger \left( \stackrel{ \rightarrow }{\nabla} A - A \stackrel{ \leftarrow }{\nabla} \right) L \\ &= L^\dagger (\nabla \wedge A) L.\end{aligned}

Note that arrows were used briefly to indicate that the partials of the gradient are still acting on $A$ despite their vector components being to one side. We are left with the very simple transformation rule

\begin{aligned}F' = L^\dagger F L,\end{aligned} \hspace{\stretch{1}}(2.3)

which has exactly the same structure as the four vector boost.

## Employing the commutator and anticommutator to find the parallel and perpendicular components.

If we apply the boost to a four vector, those components of the four vector that commute with the spatial direction $\boldsymbol{\sigma}$ are unaffected. As an example, which also serves to ensure we have the sign of the rapidity angle $\alpha$ correct, consider $\boldsymbol{\sigma} = \boldsymbol{\sigma}_1$. We have

\begin{aligned}X' = e^{-\alpha \boldsymbol{\sigma}/2} ( x^0 \gamma_0 + x^1 \gamma_1 + x^2 \gamma_2 + x^3 \gamma_3 ) (\cosh \alpha/2 + \gamma_1 \gamma_0 \sinh \alpha/2 )\end{aligned} \hspace{\stretch{1}}(2.4)

We observe that the scalar and $\boldsymbol{\sigma}_1 = \gamma_1 \gamma_0$ components of the exponential commute with $\gamma_2$ and $\gamma_3$ since there is no vector in common, but that $\boldsymbol{\sigma}_1$ anticommutes with $\gamma_0$ and $\gamma_1$. We can therefore write

\begin{aligned}X' &= x^2 \gamma_2 + x^3 \gamma_3 +( x^0 \gamma_0 + x^1 \gamma_1 ) (\cosh \alpha + \gamma_1 \gamma_0 \sinh \alpha ) \\ &= x^2 \gamma_2 + x^3 \gamma_3 +\gamma_0 ( x^0 \cosh\alpha - x^1 \sinh \alpha )+ \gamma_1 ( x^1 \cosh\alpha - x^0 \sinh \alpha )\end{aligned}

reproducing the familiar matrix result should we choose to write it out. How can we express the commutation property without resorting to components? We could write the four vector as a spatial and timelike component, as in

\begin{aligned}X = x^0 \gamma_0 + \mathbf{x} \gamma_0,\end{aligned} \hspace{\stretch{1}}(2.5)

and further separate that into components parallel and perpendicular to the spatial unit vector $\boldsymbol{\sigma}$ as

\begin{aligned}X = x^0 \gamma_0 + (\mathbf{x} \cdot \boldsymbol{\sigma}) \boldsymbol{\sigma} \gamma_0 + (\mathbf{x} \wedge \boldsymbol{\sigma}) \boldsymbol{\sigma} \gamma_0.\end{aligned} \hspace{\stretch{1}}(2.6)

However, it would be nicer to group the first two terms together, since they are ones that are affected by the transformation. It would also be nice to not have to resort to spatial dot and wedge products, since we get into trouble too easily if we try to mix dot and wedge products of four vector and spatial vector components.

What we can do is employ symmetric and antisymmetric products (the anticommutator and commutator respectively). Recall that we can write any multivector product this way, and in particular

\begin{aligned}M \boldsymbol{\sigma} = \frac{1}{{2}} (M \boldsymbol{\sigma} + \boldsymbol{\sigma} M) + \frac{1}{{2}} (M \boldsymbol{\sigma} - \boldsymbol{\sigma} M).\end{aligned} \hspace{\stretch{1}}(2.7)

Left multiplying by the unit spatial vector $\boldsymbol{\sigma}$ we have

\begin{aligned}M = \frac{1}{{2}} (M + \boldsymbol{\sigma} M \boldsymbol{\sigma}) + \frac{1}{{2}} (M - \boldsymbol{\sigma} M \boldsymbol{\sigma}) = \frac{1}{{2}} \left\{{M},{\boldsymbol{\sigma}}\right\} \boldsymbol{\sigma} + \frac{1}{{2}} \left[{M},{\boldsymbol{\sigma}}\right] \boldsymbol{\sigma}.\end{aligned} \hspace{\stretch{1}}(2.8)

When $M = \mathbf{a}$ is a spatial vector this is our familiar split into parallel and perpendicular components with the respective projection and rejection operators

\begin{aligned}\mathbf{a} = \frac{1}{{2}} \left\{\mathbf{a},{\boldsymbol{\sigma}}\right\} \boldsymbol{\sigma} + \frac{1}{{2}} \left[{\mathbf{a}},{\boldsymbol{\sigma}}\right] \boldsymbol{\sigma} = (\mathbf{a} \cdot \boldsymbol{\sigma}) \boldsymbol{\sigma} + (\mathbf{a} \wedge \boldsymbol{\sigma}) \boldsymbol{\sigma}.\end{aligned} \hspace{\stretch{1}}(2.9)
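The spatial-vector form of this split is easy to sanity check numerically; a minimal sketch with hypothetical numeric vectors, where $\boldsymbol{\sigma}$ must be a unit vector:

```python
import numpy as np

# Hypothetical numeric vectors; s plays the role of the unit vector sigma.
rng = np.random.default_rng(0)
a = rng.standard_normal(3)
s = rng.standard_normal(3)
s /= np.linalg.norm(s)

a_par = np.dot(a, s) * s                       # (a . s) s : projection
a_perp = a - a_par                             # (a ^ s) s : rejection, for unit s

assert np.allclose(a_par + a_perp, a)          # the split recovers a
assert np.isclose(np.dot(a_perp, s), 0.0)      # rejection is perpendicular to s
assert np.allclose(a_perp, -np.cross(s, np.cross(s, a)))  # rejection as a double cross product
```

The last assertion uses the identity $-\boldsymbol{\sigma} \times (\boldsymbol{\sigma} \times \mathbf{a}) = \mathbf{a} - (\mathbf{a} \cdot \boldsymbol{\sigma})\boldsymbol{\sigma}$, which holds for unit $\boldsymbol{\sigma}$.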

However, the more general split employing symmetric and antisymmetric products in 2.8 is something we can use for our four vector and bivector objects too.

Observe that we have the commutation and anti-commutation relationships

\begin{aligned}\left( \frac{1}{{2}} \left\{{M},{\boldsymbol{\sigma}}\right\} \boldsymbol{\sigma} \right) \boldsymbol{\sigma} &= \boldsymbol{\sigma} \left( \frac{1}{{2}} \left\{{M},{\boldsymbol{\sigma}}\right\} \boldsymbol{\sigma} \right) \\ \left( \frac{1}{{2}} \left[{M},{\boldsymbol{\sigma}}\right] \boldsymbol{\sigma} \right) \boldsymbol{\sigma} &= -\boldsymbol{\sigma} \left( \frac{1}{{2}} \left[{M},{\boldsymbol{\sigma}}\right] \boldsymbol{\sigma} \right).\end{aligned} \hspace{\stretch{1}}(2.10)

This split therefore serves to separate the multivector object in question nicely into the portions that are acted on by the Lorentz boost, or left unaffected.

## Application of the symmetric and antisymmetric split to the bivector field.

Let’s apply 2.8 to the spacetime event $X$ again with an x-axis boost $\boldsymbol{\sigma} = \boldsymbol{\sigma}_1$. The anticommutator portion of $X$ in this boost direction is

\begin{aligned}\frac{1}{{2}} \left\{{X},{\boldsymbol{\sigma}_1}\right\} \boldsymbol{\sigma}_1&=\frac{1}{{2}} \left(\left( x^0 \gamma_0 + x^1 \gamma_1 + x^2 \gamma_2 + x^3 \gamma_3 \right)+\gamma_1 \gamma_0\left( x^0 \gamma_0 + x^1 \gamma_1 + x^2 \gamma_2 + x^3 \gamma_3 \right) \gamma_1 \gamma_0\right) \\ &=x^2 \gamma_2 + x^3 \gamma_3,\end{aligned}

whereas the commutator portion gives us

\begin{aligned}\frac{1}{{2}} \left[{X},{\boldsymbol{\sigma}_1}\right] \boldsymbol{\sigma}_1&=\frac{1}{{2}} \left(\left( x^0 \gamma_0 + x^1 \gamma_1 + x^2 \gamma_2 + x^3 \gamma_3 \right)-\gamma_1 \gamma_0\left( x^0 \gamma_0 + x^1 \gamma_1 + x^2 \gamma_2 + x^3 \gamma_3 \right) \gamma_1 \gamma_0\right) \\ &=x^0 \gamma_0 + x^1 \gamma_1.\end{aligned}

We’ve seen that only these commutator portions are acted on by the boost. We have therefore found the desired logical grouping of the four vector $X$ into portions that are left unchanged by the boost and those that are affected. That is

\begin{aligned}\frac{1}{{2}} \left[{X},{\boldsymbol{\sigma}}\right] \boldsymbol{\sigma} &= x^0 \gamma_0 + (\mathbf{x} \cdot \boldsymbol{\sigma}) \boldsymbol{\sigma} \gamma_0 \\ \frac{1}{{2}} \left\{{X},{\boldsymbol{\sigma}}\right\} \boldsymbol{\sigma} &= (\mathbf{x} \wedge \boldsymbol{\sigma}) \boldsymbol{\sigma} \gamma_0 \end{aligned} \hspace{\stretch{1}}(2.12)
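These split results can be spot-checked numerically in a matrix representation of the gamma matrices. This sketch assumes the standard Dirac representation (one arbitrary choice; the relations themselves are representation independent), with hypothetical event coordinates:

```python
import numpy as np

# Dirac representation of the gamma matrices
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
Z, I2 = np.zeros((2, 2), dtype=complex), np.eye(2, dtype=complex)
g0 = np.block([[I2, Z], [Z, -I2]])
g1, g2, g3 = (np.block([[Z, sk], [-sk, Z]]) for sk in (s1, s2, s3))

sig = g1 @ g0                                  # sigma_1 = gamma_1 gamma_0, squares to +1
x = [1.5, -0.25, 2.0, 0.5]                     # hypothetical event coordinates
X = sum(c * g for c, g in zip(x, (g0, g1, g2, g3)))

anti = 0.5 * (X @ sig + sig @ X) @ sig         # (1/2) {X, sigma} sigma
comm = 0.5 * (X @ sig - sig @ X) @ sig         # (1/2) [X, sigma] sigma

assert np.allclose(anti + comm, X)             # the split recovers X
assert np.allclose(anti, x[2] * g2 + x[3] * g3)  # portion unaffected by the boost
assert np.allclose(comm, x[0] * g0 + x[1] * g1)  # portion acted on by the boost
```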

Let’s now return to the bivector field $F = \nabla \wedge A = \mathbf{E} + I c \mathbf{B}$, and split that multivector into boostable and unboostable portions with the commutator and anticommutator respectively.

Observing that our pseudoscalar $I$ commutes with all spatial vectors we have for the anticommutator parts that will not be affected by the boost

\begin{aligned}\frac{1}{{2}} \left\{{\mathbf{E} + I c \mathbf{B}},{\boldsymbol{\sigma}}\right\} \boldsymbol{\sigma} &= (\mathbf{E} \cdot \boldsymbol{\sigma}) \boldsymbol{\sigma} + I c (\mathbf{B} \cdot \boldsymbol{\sigma}) \boldsymbol{\sigma},\end{aligned} \hspace{\stretch{1}}(2.14)

and for the components that will be boosted we have

\begin{aligned}\frac{1}{{2}} \left[{\mathbf{E} + I c \mathbf{B}},{\boldsymbol{\sigma}}\right] \boldsymbol{\sigma} &= (\mathbf{E} \wedge \boldsymbol{\sigma}) \boldsymbol{\sigma} + I c (\mathbf{B} \wedge \boldsymbol{\sigma}) \boldsymbol{\sigma}.\end{aligned} \hspace{\stretch{1}}(2.15)

For the four vector case we saw that the components that lay “perpendicular” to the boost direction were unaffected by the boost. For the field we see the opposite, and the components of the individual electric and magnetic fields that are parallel to the boost direction are unaffected.

Our boosted field is therefore

\begin{aligned}F' = (\mathbf{E} \cdot \boldsymbol{\sigma}) \boldsymbol{\sigma} + I c (\mathbf{B} \cdot \boldsymbol{\sigma}) \boldsymbol{\sigma}+ \left( (\mathbf{E} \wedge \boldsymbol{\sigma}) \boldsymbol{\sigma} + I c (\mathbf{B} \wedge \boldsymbol{\sigma}) \boldsymbol{\sigma}\right) \left( \cosh \alpha + \boldsymbol{\sigma} \sinh \alpha \right)\end{aligned} \hspace{\stretch{1}}(2.16)

Focusing on just the non-parallel terms we have

\begin{aligned}\left( (\mathbf{E} \wedge \boldsymbol{\sigma}) \boldsymbol{\sigma} + I c (\mathbf{B} \wedge \boldsymbol{\sigma}) \boldsymbol{\sigma}\right) \left( \cosh \alpha + \boldsymbol{\sigma} \sinh \alpha \right)&=(\mathbf{E}_\perp + I c \mathbf{B}_\perp ) \cosh\alpha+(I \mathbf{E} \times \boldsymbol{\sigma} - c \mathbf{B} \times \boldsymbol{\sigma} ) \sinh\alpha \\ &=\mathbf{E}_\perp \cosh\alpha - c (\mathbf{B} \times \boldsymbol{\sigma} ) \sinh\alpha + I ( c \mathbf{B}_\perp \cosh\alpha + (\mathbf{E} \times \boldsymbol{\sigma}) \sinh\alpha ) \\ &=\gamma \left(\mathbf{E}_\perp - c (\mathbf{B} \times \boldsymbol{\sigma} ) {\left\lvert{\mathbf{v}}\right\rvert}/c+ I ( c \mathbf{B}_\perp + (\mathbf{E} \times \boldsymbol{\sigma}) {\left\lvert{\mathbf{v}}\right\rvert}/c) \right)\end{aligned}

A final regrouping gives us

\begin{aligned}F'&=\mathbf{E}_\parallel + \gamma \left( \mathbf{E}_\perp - \mathbf{B} \times \mathbf{v} \right)+I c \left( \mathbf{B}_\parallel + \gamma \left( \mathbf{B}_\perp + \mathbf{E} \times \mathbf{v}/c^2 \right) \right)\end{aligned} \hspace{\stretch{1}}(2.17)
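This regrouped result can be checked against a direct matrix computation of $L^\dagger F L$. The sketch below assumes $c = 1$, a pure electric field along $\hat{\mathbf{y}}$, an x-axis boost, the standard Dirac representation, and arbitrary numeric values for the rapidity and field strength (reversion of the exponential of a bivector just flips the sign of the bivector, so $L^\dagger = e^{-\alpha \boldsymbol{\sigma}_1/2}$ in this representation):

```python
import numpy as np

# Dirac representation of the gamma matrices
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
Z, I2 = np.zeros((2, 2), dtype=complex), np.eye(2, dtype=complex)
g0 = np.block([[I2, Z], [Z, -I2]])
g1, g2, g3 = (np.block([[Z, sk], [-sk, Z]]) for sk in (s1, s2, s3))
I4 = np.eye(4, dtype=complex)

sigma1 = g1 @ g0                      # boost plane bivector, squares to +1
Ips = g0 @ g1 @ g2 @ g3               # spacetime pseudoscalar

alpha, E2 = 0.7, 1.3                  # arbitrary rapidity and field strength
v, gamma = np.tanh(alpha), np.cosh(alpha)

# F = E + I c B with c = 1; pure electric field along y: F = E2 gamma_2 gamma_0
F = E2 * (g2 @ g0)

# L = exp(alpha sigma1 / 2), and its reversion exp(-alpha sigma1 / 2)
L = np.cosh(alpha / 2) * I4 + np.sinh(alpha / 2) * sigma1
Ldag = np.cosh(alpha / 2) * I4 - np.sinh(alpha / 2) * sigma1
Fp = Ldag @ F @ L

# Expected from the regrouped result with B = 0 and E entirely perpendicular:
# E' = gamma E_perp, and B' = -gamma v x E, here a z-directed magnetic field.
expected = gamma * E2 * (g2 @ g0) + Ips @ (g3 @ g0) * (-gamma * v * E2)
assert np.allclose(Fp, expected)
```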

In particular, when we consider the proton-electron system as in equation (6.70) of [1], where it is stated that the electron will feel a magnetic field given by

\begin{aligned}\mathbf{B} = - \frac{\mathbf{v}}{c} \times \mathbf{E}\end{aligned} \hspace{\stretch{1}}(2.18)

we can see where this comes from. If $F = \mathbf{E} + I c (0)$ is the field acting on the electron, then applying a $\mathbf{v}$ boost to the electron perpendicular to the field (i.e. radial motion), we get

\begin{aligned}F' = I c \gamma \mathbf{E} \times \mathbf{v}/c^2 =-I c \gamma \frac{\mathbf{v}}{c^2} \times \mathbf{E}\end{aligned} \hspace{\stretch{1}}(2.19)

We also have an additional $1/c$ factor in our result, but that’s a consequence of the choice of units where the dimensions of $\mathbf{E}$ match $c \mathbf{B}$, whereas in the text $\mathbf{E}$ and $\mathbf{B}$ have the same units. We also have an additional $\gamma$ factor, so we must presume that ${\left\lvert{\mathbf{v}}\right\rvert} \ll c$ in this portion of the text. That is actually a requirement here, for if the electron were already in motion, we'd have to boost a field that also included a magnetic component. A consequence of this is that the final interaction Hamiltonian of (6.75) is necessarily non-relativistic.

# References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.

## PHY356F (UofT Quantum Mechanics I) lecture notes for Oct 26.

Posted by peeterjoot on October 26, 2010

[Click here for a PDF of this post with nicer formatting]

# Oct 26.

Short class today: 43 minutes were wasted because the feedback given to the Prof was so harsh that he wants to cancel the mid-term, since students have said they aren’t prepared. How ironic that this wastes more time that could be spent getting us prepared!

## Chapter I

Why do this (Dirac notation) math? Because of the Stern-Gerlach experiment. Explaining the SG experiment is just not possible with wave functions and the “old style” Schrödinger equation that operates on wave functions

\begin{aligned}- \frac{\hbar^2}{2m} \boldsymbol{\nabla}^2 \Psi(\mathbf{x},t) + V(\mathbf{x}) \Psi(\mathbf{x},t) = i \hbar \frac{\partial {\Psi(\mathbf{x},t)}}{\partial {t}}.\end{aligned} \hspace{\stretch{1}}(3.55)

Review all of Chapter I so that you understand the idea of a Hermitian operator and its associated eigenvalues and eigenvectors.

A Hermitian operator $A$ is associated with a measurable quantity.

Example. $S_z$ is associated with the measurement of “spin-up” ${\lvert {z+} \rangle}$ or “spin-down” ${\lvert {z-} \rangle}$ states in silver atoms in the SG experiment.

Each operator has associated with it a set of eigenvalues, and those eigenvalues are the outcomes of possible measurements.

$S_z$ can be represented as

\begin{aligned}S_z = \frac{\hbar}{2}\begin{bmatrix}1 & 0 \\ 0 & -1\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(3.56)

or

\begin{aligned}S_z = \frac{\hbar}{2}\left( {\lvert {z+} \rangle}{\langle {z+} \rvert} - {\lvert {z-} \rangle}{\langle {z-} \rvert} \right).\end{aligned} \hspace{\stretch{1}}(3.57)

Find the eigenvalues of $S_z$ in order to establish the possible outcomes of measurements of the z-component of the intrinsic angular momentum.

This is the point of the course: to find the possible outcomes. You have to appreciate that the measurements in the SG experiment are trying to find the possible outcomes of the z-component measurement. The eigenvalues of this operator give us those possible outcomes.

### Example problem. What if you put a brick in the experiment?

In the SG experiment the atoms that are “spin down” along the z-direction are blocked. Diagram: silver going through a hole, with a brick between the detector and the spin-down location on the screen:

FIXME: scan it. Oct 26, Fig 1.

What is the probability of measuring an outcome of $+\hbar/2$ along the x-direction?

Recall from Chapter I

\begin{aligned}{\lvert {\phi} \rangle} = \sum_n c_n {\lvert {a_n} \rangle}\end{aligned} \hspace{\stretch{1}}(3.58)

We can express an arbitrary state ${\lvert {\phi} \rangle}$ in terms of basis vectors (these could be eigenstates of an operator $A$, or, for example, the eigenstates of another operator $B$). Note that here in physics we will only work with orthonormal basis sets. To calculate the $c_n$'s we take inner products

\begin{aligned}\left\langle{{a_m}} \vert {{\phi}}\right\rangle = \sum_n c_n \left\langle{{a_m}} \vert {{a_n}}\right\rangle = \sum_n c_n \delta_{mn} = c_m\end{aligned} \hspace{\stretch{1}}(3.59)

The probability for measured outcome $a_m$ is

\begin{aligned}{\left\lvert{c_m}\right\rvert}^2 = {\left\lvert{ \left\langle{{a_m}} \vert {{\phi}}\right\rangle }\right\rvert}^2\end{aligned} \hspace{\stretch{1}}(3.60)

In the end we have to appreciate that part of QM is figuring out what the possible outcomes are and the probabilities of those outcomes.

Appreciate that ${\lvert {\phi} \rangle} = {\lvert {z+} \rangle}$ in this case. This is a superposition of eigenstates of $S_z$. Why is it a superposition? Because one of the coefficients is 1, and the other is 0.

\begin{aligned}{\lvert {\phi} \rangle}= c_1 {\lvert {z+} \rangle}+ c_2 {\lvert {z-} \rangle}= c_1 {\lvert {z+} \rangle}+ 0 {\lvert {z-} \rangle}\end{aligned} \hspace{\stretch{1}}(3.61)

So

\begin{aligned}c_1 = 1\end{aligned} \hspace{\stretch{1}}(3.62)

recall that

\begin{aligned}S_z &= \frac{\hbar}{2}\begin{bmatrix}1 & 0 \\ 0 & -1\end{bmatrix} \\ {\lvert {z+} \rangle} &\leftrightarrow\begin{bmatrix}1 \\ 0\end{bmatrix} \\ {\lvert {z-} \rangle} &\leftrightarrow\begin{bmatrix}0 \\ 1\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.63)

Also recall that

\begin{aligned}S_x &= \frac{\hbar}{2}\begin{bmatrix}0 & 1 \\ 1 & 0\end{bmatrix} \\ {\lvert {x+} \rangle} &\leftrightarrow\frac{1}{{\sqrt{2}}}\begin{bmatrix}1 \\ 1\end{bmatrix} \\ {\lvert {x-} \rangle} &\leftrightarrow\frac{1}{{\sqrt{2}}}\begin{bmatrix}1 \\ -1\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.66)

(with eigenvalues $\pm\hbar/2$).

These eigenvectors are expressed in terms of ${\lvert {z+} \rangle}$ and ${\lvert {z-} \rangle}$, so that

\begin{aligned}{\lvert {x+} \rangle}&=\frac{1}{{\sqrt{2}}} \left( {\lvert {z+} \rangle} + {\lvert {z-} \rangle} \right) \\ {\lvert {x-} \rangle}&=\frac{1}{{\sqrt{2}}} \left( {\lvert {z+} \rangle} - {\lvert {z-} \rangle} \right) .\end{aligned} \hspace{\stretch{1}}(3.69)

Outcome $+\hbar/2$ along the x-direction has an associated state ${\lvert {x+} \rangle}$. That probability is

\begin{aligned}{\left\lvert{\left\langle{{x+}} \vert {{\phi}}\right\rangle}\right\rvert}^2&={\left\lvert{\frac{1}{{\sqrt{2}}} \left( {\langle {z+} \rvert} + {\langle {z-} \rvert} \right) {\lvert {\phi} \rangle}}\right\rvert}^2 \\ &=\frac{1}{{2}}{\left\lvert{\left\langle{{z+}} \vert {{\phi}}\right\rangle + \left\langle{{z-}} \vert {{\phi}}\right\rangle}\right\rvert}^2 \\ &=\frac{1}{{2}}{\left\lvert{\left\langle{{z+}} \vert {{z+}}\right\rangle + \left\langle{{z-}} \vert {{z+}}\right\rangle}\right\rvert}^2 \\ &=\frac{1}{{2}}{\left\lvert{1 + 0}\right\rvert}^2 \\ &=\frac{1}{{2}}\end{aligned}
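This probability can be confirmed numerically from the matrix representations above. A minimal sketch with $\hbar = 1$ (a harmless choice here, since only the eigenvectors enter the probability):

```python
import numpy as np

# S_x in the z basis, with hbar = 1
Sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)

zp = np.array([1, 0], dtype=complex)           # |z+>, the state fed to the x measurement

w, V = np.linalg.eigh(Sx)                      # eigenvalues in ascending order
xp = V[:, np.argmax(w)]                        # eigenvector for +1/2, i.e. |x+>

prob = abs(np.vdot(xp, zp)) ** 2               # |<x+|z+>|^2
assert np.isclose(prob, 0.5)
```

The result is independent of the arbitrary phase that the eigensolver attaches to $\lvert x+ \rangle$, since only the modulus squared of the overlap matters.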

### Example problem variation. With a third splitter (SGZ)

The probability for outcome $+\hbar/2$ along z after the second SGZ magnets is

\begin{aligned}{\left\lvert{\left\langle{{z+}} \vert {{\phi'}}\right\rangle}\right\rvert}^2&={\left\lvert{\left\langle{{z+}} \vert {{x+}}\right\rangle}\right\rvert}^2 \\ &={\left\lvert{{\langle {z+} \rvert} \frac{1}{{\sqrt{2}}} \left( {\lvert {z+} \rangle} + {\lvert {z-} \rangle} \right) }\right\rvert}^2 \\ &=\frac{1}{{2}}{\left\lvert{\left\langle{{z+}} \vert {{z+}}\right\rangle + \left\langle{{z+}} \vert {{z-}}\right\rangle }\right\rvert}^2 \\ &=\frac{1}{{2}}\end{aligned}

My question: what’s the point of the brick when the second splitter is already only being fed by the “spin up” stream? Answer: just to ensure that the states are prepared in the expected way. If the beams are too close together, without the brick perhaps we end up with some spin down in the upper stream. Note that the beam separation here is on the order of centimeters; i.e., imagine that it is hard to redirect just one of the beams to the next stage splitter without blocking the other, or else the next splitter inevitably gets fed with some of both. It might be nice to see a real picture of the SG apparatus, complete with scale.

Why silver? Silver has 47 electrons, all but one of which are in spin pairs. Only the “outermost” electron is free to have independent spin.

Aside: Note that we have the term “Collapse” to describe the now-known state after measurement. There’s some debate about the applicability of this term, and the interpretation that this imposes. Will not be discussed here.

## On section 5.11, the complete wavefunction.

Aside: section 5.12 (Pauli exclusion principle and Fermi energy) excluded.

The complete wavefunction

\begin{aligned}{\lvert {\phi} \rangle} &= \text{the complete state of an atom in the SG experiment} \\ &= {\lvert {u} \rangle} \otimes {\lvert {\chi} \rangle} \end{aligned}

We also write

\begin{aligned}{\lvert {u} \rangle} \otimes {\lvert {\chi} \rangle} &= {\lvert {u} \rangle} {\lvert {\chi} \rangle} \end{aligned} \hspace{\stretch{1}}(3.71)

where ${\lvert {u} \rangle}$ is associated with translation, and ${\lvert {\chi} \rangle}$ is associated with spin. This is a product state, where the $\otimes$ is a symbol for states in two or more different spaces.

## Section 5.9, Projection operator.

From chapter 1, we have

\begin{aligned}P_n = {\lvert {a_n} \rangle} {\langle {a_n} \rvert}\end{aligned} \hspace{\stretch{1}}(3.72)

is called the projection operator. This is physically relevant: it takes a general state and gives you the component of that state associated with that eigenvector. Observe

\begin{aligned}P_n {\lvert {\phi} \rangle} &={\lvert {a_n} \rangle} \left\langle{{a_n}} \vert {{\phi}}\right\rangle = c_n {\lvert {a_n} \rangle}.\end{aligned} \hspace{\stretch{1}}(3.73)

Projection operator for the ${\lvert {z+} \rangle}$ state

\begin{aligned}P_{z+} = {\lvert {z+} \rangle} {\langle {z+} \rvert}\end{aligned} \hspace{\stretch{1}}(3.74)
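The defining projector properties are easy to illustrate numerically in the two-component representation of the spin states; the 0.6/0.8 coefficients below are arbitrary:

```python
import numpy as np

zp = np.array([1, 0], dtype=complex)           # |z+>
zm = np.array([0, 1], dtype=complex)           # |z->
P = np.outer(zp, zp.conj())                    # P_{z+} = |z+><z+|

assert np.allclose(P @ P, P)                   # idempotent, as a projector must be
assert np.allclose(P, P.conj().T)              # Hermitian

phi = 0.6 * zp + 0.8 * zm                      # an arbitrary normalized state
assert np.allclose(P @ phi, 0.6 * zp)          # picks out the |z+> component
```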

PROJECTION OPERATOR TO BE CONTINUED NEXT LECTURE.

## Classical Electrodynamic gauge interaction.

Posted by peeterjoot on October 22, 2010

[Click here for a PDF of this post with nicer formatting]

# Motivation.

In [1] chapter 6, we have a statement that in classical mechanics the electromagnetic interaction is due to a transformation of the following form

\begin{aligned}\mathbf{p} &\rightarrow \mathbf{p} - \frac{e}{c} \mathbf{A} \\ E &\rightarrow E - e \phi\end{aligned} \hspace{\stretch{1}}(1.1)

Let’s verify that this does produce the classical interaction law. Putting a more familiar label on this, we should see that we obtain the Lorentz force law from a transformation of the Hamiltonian.

# Hamiltonian equations.

Recall that the Hamiltonian was defined in terms of conjugate momentum components $p_k$ as

\begin{aligned}H(x_k, p_k) = \dot{x}_k p_k - \mathcal{L}(x_k, \dot{x}_k),\end{aligned} \hspace{\stretch{1}}(2.3)

we can take $x_k$ partials to obtain the first of the Hamiltonian system of equations for the motion

\begin{aligned}\frac{\partial {H}}{\partial {x_k}} &= - \frac{\partial {\mathcal{L}}}{\partial {x_k}} \\ &= - \frac{d}{dt} \frac{\partial {\mathcal{L}}}{\partial {\dot{x}_k}} \end{aligned}

With $p_k \equiv {\partial {\mathcal{L}}}/{\partial {\dot{x}_k}}$, and taking $p_k$ partials too, we have the system of equations

\begin{subequations}

\begin{aligned} \frac{\partial {H}}{\partial {x_k}} &= - \frac{d p_k}{dt}\end{aligned} \hspace{\stretch{1}}(2.4a)

\begin{aligned} \frac{\partial {H}}{\partial {p_k}} &= \dot{x}_k\end{aligned} \hspace{\stretch{1}}(2.4b)

\end{subequations}

# Classical interaction

Starting with the free particle Hamiltonian

\begin{aligned}H = \frac{\mathbf{p}^2}{2m},\end{aligned} \hspace{\stretch{1}}(3.5)

we make the transformation required to both the energy and momentum terms

\begin{aligned}H - e\phi = \frac{\left(\mathbf{p} - \frac{e}{c} \mathbf{A}\right)^2 }{2m} = \frac{1}{{2m}} \mathbf{p}^2 - \frac{e}{m c} \mathbf{p} \cdot \mathbf{A} + \frac{1}{{2m}} \left(\frac{e}{c}\right)^2 \mathbf{A}^2 \end{aligned} \hspace{\stretch{1}}(3.6)

From 2.4b we find

\begin{aligned}\frac{d x_k}{dt} = \frac{\partial {H}}{\partial {p_k}} = \frac{1}{{m}} \left( p_k - \frac{e}{c} A_k \right),\end{aligned} \hspace{\stretch{1}}(3.7)

or

\begin{aligned}p_k = m \frac{d x_k}{dt} + \frac{e}{c} A_k.\end{aligned} \hspace{\stretch{1}}(3.8)

Taking derivatives and employing 2.4a we have

\begin{aligned}\frac{d p_k}{dt} &= m \frac{d^2 x_k}{dt^2} + \frac{e}{c} \frac{d A_k}{dt} \\ &= -\frac{\partial {H}}{\partial {x_k}} \\ &=\frac{1}{{m}} \frac{e}{c} p_n \frac{\partial {A_n}}{\partial {x_k}} - e \frac{\partial {\phi}}{\partial {x_k}}- \frac{1}{{m}} \left(\frac{e}{c}\right)^2 A_n \frac{\partial {A_n}}{\partial {x_k}} \\ &=\frac{1}{{m}} \frac{e}{c} \left(m \frac{d x_n}{dt} + \frac{e}{c} A_n\right)\frac{\partial {A_n}}{\partial {x_k}} - e \frac{\partial {\phi}}{\partial {x_k}}- \frac{1}{{m}} \left(\frac{e}{c}\right)^2 A_n \frac{\partial {A_n}}{\partial {x_k}} \\ &=\frac{e}{c} \frac{d x_n}{dt}\frac{\partial {A_n}}{\partial {x_k}} - e \frac{\partial {\phi}}{\partial {x_k}}\end{aligned}

Rearranging, and utilizing the convective derivative expansion $d/dt = (d x_a/dt) {\partial {}}/{\partial {x_a}}$ (i.e., the chain rule, assuming static fields so that there is no ${\partial {\mathbf{A}}}/{\partial {t}}$ contribution), we have

\begin{aligned}m \frac{d^2 x_k}{dt^2} &=\frac{e}{c} \frac{d x_n}{dt}\left( \frac{\partial {A_n}}{\partial {x_k}}- \frac{\partial {A_k}}{\partial {x_n}} \right) - e \frac{\partial {\phi}}{\partial {x_k}}\end{aligned} \hspace{\stretch{1}}(3.9)

We guess and expect that the first term of 3.9 is $e (\mathbf{v}/c \times \mathbf{B})_k$. Let’s verify this

\begin{aligned}(\mathbf{v} \times \mathbf{B})_k&= \dot{x}_m B_d \epsilon_{k m d} \\ &= \dot{x}_m ( \epsilon_{d a b} \partial_a A_b ) \epsilon_{k m d} \\ &= \dot{x}_m \partial_a A_b \epsilon_{d a b} \epsilon_{d k m}\end{aligned}

Since $\epsilon_{d a b} \epsilon_{d k m} = \delta_{a k} \delta_{b m} - \delta_{a m} \delta_{b k}$ we have

\begin{aligned}(\mathbf{v} \times \mathbf{B})_k&= \dot{x}_m \partial_a A_b \epsilon_{d a b} \epsilon_{d k m} \\ &=\dot{x}_m \partial_a A_b \delta_{a k} \delta_{b m} -\dot{x}_m \partial_a A_b \delta_{a m} \delta_{b k} \\ &= \dot{x}_m ( \partial_k A_m - \partial_m A_k )\end{aligned}

Except for a difference in dummy summation variables, this matches what we had in 3.9. Thus we are able to put that into the traditional Lorentz force vector form

\begin{aligned}m \frac{d^2 \mathbf{x}}{dt^2} &= e \frac{\mathbf{v}}{c} \times \mathbf{B} + e \mathbf{E}.\end{aligned} \hspace{\stretch{1}}(3.10)

It’s good to see that we get the classical interaction from this transformation before moving on to the trickier seeming QM interaction.
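As a numerical aside of my own, the $\epsilon_{d a b} \epsilon_{d k m} = \delta_{a k} \delta_{b m} - \delta_{a m} \delta_{b k}$ identity used in the verification above can be confirmed by brute force:

```python
import numpy as np

# Build the rank-3 Levi-Civita symbol epsilon_{ijk}
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[i, k, j] = -1.0

# Contract on the first index: sum_d eps_{dab} eps_{dkm}
lhs = np.einsum('dab,dkm->abkm', eps, eps)

# delta_{ak} delta_{bm} - delta_{am} delta_{bk}
delta = np.eye(3)
rhs = (np.einsum('ak,bm->abkm', delta, delta)
       - np.einsum('am,bk->abkm', delta, delta))

assert np.allclose(lhs, rhs)
```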

# References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.

## A linker option to try out.

Posted by peeterjoot on October 21, 2010

Note to self:

Saw a linker .map file generated as a side effect of a build today (some non-DB2 code), produced by:

g++ ... -Xlinker -Map -Xlinker foo.map


(linux, and the gnu-binutils linker; -Wl,-Map,foo.map is an equivalent spelling).

I’ve used linker .map files on AIX countless times, and wished for the same in our linux builds (but didn’t think of looking for the option to generate something equivalent).

I want to try this out next time I have a link error where the unresolved symbol comes from one of the many static archives that we bundle into our product shared libs. Would this verbosity help with understanding the origin of the link error better?


## Oct 19, PHY356F lecture notes.

Posted by peeterjoot on October 20, 2010

[Click here for a PDF of this post with nicer formatting]

# Oct 19.

Last time, we started thinking about angular momentum. This time, we will examine orbital and intrinsic angular momentum.

Orbital angular momentum in classical physics and quantum physics is expressed as

\begin{aligned}\mathbf{L} &= \mathbf{r} \times \mathbf{p},\end{aligned} \hspace{\stretch{1}}(2.7)

and

\begin{aligned}\mathbf{L} &= \mathbf{R} \times \mathbf{P},\end{aligned} \hspace{\stretch{1}}(2.8)

where $\mathbf{R}$ and $\mathbf{P}$ are quantum mechanical operators corresponding to position and momentum

\begin{aligned}\mathbf{R} &= X \hat{\mathbf{x}} + Y \hat{\mathbf{y}} + Z \hat{\mathbf{z}} \\ \mathbf{P} &= P_x \hat{\mathbf{x}} + P_y \hat{\mathbf{y}} + P_z \hat{\mathbf{z}} \\ \mathbf{L} &= L_x \hat{\mathbf{x}} + L_y \hat{\mathbf{y}} + L_z \hat{\mathbf{z}}\end{aligned} \hspace{\stretch{1}}(2.9)

Practice problems:
\begin{itemize}
\item a) Determine the commutators $\left[{L_x},{L_y}\right], \left[{L_y},{L_z}\right], \left[{L_z},{L_x}\right]$. For the first,

\begin{aligned}\left[{L_x},{L_y}\right]&=(r_y p_z -r_z p_y)(r_z p_x -r_x p_z)-(r_z p_x -r_x p_z)(r_y p_z -r_z p_y) \\ &=r_y p_z (r_z p_x -r_x p_z)-r_z p_y (r_z p_x -r_x p_z)- r_z p_x (r_y p_z -r_z p_y)+ r_x p_z (r_y p_z -r_z p_y) \\ &=r_y p_z r_z p_x-r_y p_z r_x p_z-r_z p_y r_z p_x+r_z p_y r_x p_z- r_z p_x r_y p_z+ r_z p_x r_z p_y+ r_x p_z r_y p_z- r_x p_z r_z p_y \\ \end{aligned}

With $p_i r_j = r_j p_i - i \hbar \delta_{ij}$, we have

\begin{aligned}\left[{L_x},{L_y}\right]&=r_y r_z p_z p_x-r_y r_z p_x p_z-r_z r_y p_z p_x+r_z r_y p_x p_z- r_z r_x p_y p_z+ r_z r_x p_z p_y+ r_x r_z p_y p_z- r_x r_z p_z p_y \\ &\quad -i \hbar \left(r_y p_x- r_x p_y \right)\end{aligned}

Since the $p_i p_j$ operators commute, all the first terms cancel, leaving just

\begin{aligned}\left[{L_x},{L_y}\right]&=i \hbar L_z\end{aligned}

\item b) $L_z$ in spherical coordinates.

The answer is

\begin{aligned}L_z \leftrightarrow -i \hbar \frac{\partial {}}{\partial {\phi}}\end{aligned} \hspace{\stretch{1}}(2.12)

Work through this.
\end{itemize}
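A numerical spot check of my own for part (a), using the standard $3 \times 3$ matrix representation of the $l = 1$ angular momentum operators (an assumption here: these matrices are quoted from the standard representation rather than derived in the lecture; $\hbar = 1$):

```python
import numpy as np

hbar = 1.0
s2 = np.sqrt(2.0)

# l = 1 representation, basis ordered m = +1, 0, -1
Lz = hbar * np.diag([1.0, 0.0, -1.0])
Lp = hbar * np.array([[0, s2, 0], [0, 0, s2], [0, 0, 0]], dtype=complex)  # raising
Lm = Lp.conj().T                                                          # lowering
Lx = (Lp + Lm) / 2
Ly = (Lp - Lm) / (2j)

def comm(A, B):
    return A @ B - B @ A

# The angular momentum algebra, cyclically
assert np.allclose(comm(Lx, Ly), 1j * hbar * Lz)
assert np.allclose(comm(Ly, Lz), 1j * hbar * Lx)
assert np.allclose(comm(Lz, Lx), 1j * hbar * Ly)

# L^2 = l(l+1) hbar^2 on every state for l = 1
L2 = Lx @ Lx + Ly @ Ly + Lz @ Lz
assert np.allclose(L2, 2 * hbar**2 * np.eye(3))
```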

Part of the task in this intro QM treatment is to carefully determine the eigenfunctions for these operators.

The spherical harmonics are given by $Y_{lm}(\theta, \phi)$ such that

\begin{aligned}Y_{lm}(\theta, \phi) \propto e^{i m \phi}\end{aligned} \hspace{\stretch{1}}(2.13)

\begin{aligned}L_z Y_{lm}(\theta, \phi)&= -i \hbar \frac{\partial {}}{\partial {\phi}} Y_{lm}(\theta, \phi) \\ &= -i \hbar \frac{\partial {}}{\partial {\phi}} \text{constants} (e^{im \phi}) \\ &= \hbar m \text{constants} e^{i m \phi} \\ &= \hbar m Y_{lm}(\theta, \phi)\end{aligned}

The z-component is quantized, since $m$ is an integer: $m = 0, \pm 1, \pm 2, ...$. The total angular momentum

\begin{aligned}\mathbf{L}^2 = \mathbf{L} \cdot \mathbf{L} = L_x^2 + L_y^2 + L_z^2\end{aligned} \hspace{\stretch{1}}(2.14)

is also quantized (details in the book).

The eigenvalue properties here represent very important physical features. This is also important in the hydrogen atom problem, where the potential depends only on $r$, so the particle is effectively free in the angular coordinates. This allows us to apply the angular work done for the free particle to the subsequent potential-bound solution.

Note that for the above, we also have the alternate, abstract ket notation, method of writing the eigenvalue behavior.

\begin{aligned}L_z {\lvert {lm} \rangle} = \hbar m {\lvert {lm} \rangle}\end{aligned} \hspace{\stretch{1}}(2.15)

Just like

\begin{aligned}X {\lvert {x} \rangle} &= x {\lvert {x} \rangle} \\ P {\lvert {p} \rangle} &= p {\lvert {p} \rangle}\end{aligned} \hspace{\stretch{1}}(2.16)

For the total angular momentum our spherical harmonic eigenfunctions have the property

\begin{aligned}\mathbf{L}^2 {\lvert {lm} \rangle} &= \hbar^2 l (l + 1){\lvert {l m} \rangle}\end{aligned} \hspace{\stretch{1}}(2.18)

with $l = 0, 1, 2, \cdots$.

Alternately in plain old non-abstract notation we can write this as

\begin{aligned}\mathbf{L}^2 Y_{lm}(\theta, \phi) &= \hbar^2 l (l + 1) Y_{lm}(\theta, \phi)\end{aligned} \hspace{\stretch{1}}(2.19)

Now we can introduce the Raising and Lowering Operators, which are

\begin{aligned}L_{+} &= L_x + i L_y \\ L_{-} &= L_x - i L_y,\end{aligned} \hspace{\stretch{1}}(2.20)

respectively. These are abstract quantities, but also physically important since they relate quantum levels of the angular momentum. How do we show this?

Last time, we saw that

\begin{aligned}\left[{L_z},{L_{+}}\right] &= +\hbar L_{+} \\ \left[{L_z},{L_{-}}\right] &= -\hbar L_{-}\end{aligned} \hspace{\stretch{1}}(2.22)

Note that it is implied that we are operating on ket vectors

\begin{aligned}L_z (L_{-} {\lvert {lm} \rangle} )\end{aligned}

with

\begin{aligned}{\lvert {lm} \rangle} \leftrightarrow Y_{lm}(\theta, \phi)\end{aligned} \hspace{\stretch{1}}(2.24)

Question: What is $L_{-} {\lvert {lm} \rangle}$?

Substitute

\begin{aligned}L_z L_{-} - L_{-} L_z &= - \hbar L_{-} \\ \implies \\ L_z L_{-} &= L_{-} L_z - \hbar L_{-}\end{aligned}

\begin{aligned}L_z \left( L_{-} {\lvert {lm} \rangle} \right)&=L_{-} L_z {\lvert {lm} \rangle} - \hbar L_{-} {\lvert {lm} \rangle} \\ &=L_{-} m \hbar {\lvert {lm} \rangle} - \hbar L_{-} {\lvert {lm} \rangle} \\ &=\hbar \left( m L_{-} {\lvert {lm} \rangle} - L_{-} {\lvert {lm} \rangle} \right) \\ &=\hbar (m-1) \left( L_{-} {\lvert {lm} \rangle} \right)\end{aligned}

So $L_{-} {\lvert {lm} \rangle} = {\lvert {\psi} \rangle}$ is another spherical harmonic, and we have

\begin{aligned}L_z {\lvert {\psi} \rangle} &= \hbar (m-1) {\lvert {\psi} \rangle}\end{aligned} \hspace{\stretch{1}}(2.25)

This lowering operator quantity causes a physical change in the state of the system, lowering the observable state (ie: the eigenvalue) by $\hbar$.

Now we want to normalize ${\lvert {\psi} \rangle} = L_{-} {\lvert {lm} \rangle}$, by computing $\left\langle{{\psi}} \vert {{\psi}}\right\rangle$.

\begin{aligned}\left\langle{{\psi}} \vert {{\psi}}\right\rangle &= {\langle {lm} \rvert} L_{-}^\dagger L_{-} {\lvert {lm} \rangle} \\ &= {\langle {lm} \rvert} L_{+} L_{-} {\lvert {lm} \rangle}\end{aligned}

We can use

\begin{aligned}L_{+} L_{-} = \mathbf{L}^2 - L_z^2 + \hbar L_z,\end{aligned} \hspace{\stretch{1}}(2.26)

So, since $L_{-} {\lvert {lm} \rangle}$ was shown above to be an eigenket of $L_z$ with eigenvalue $\hbar (m-1)$, while $L_{-}$ commutes with $\mathbf{L}^2$ (leaving $l$ unchanged), we know that

\begin{aligned}L_{-} {\lvert {lm} \rangle} = C {\lvert {l,m-1} \rangle}\end{aligned} \hspace{\stretch{1}}(2.27)

we have from 2.26

\begin{aligned}{\left\lvert{C}\right\rvert}^2 \underbrace{\left\langle{{l,m-1}} \vert {{l,m-1}}\right\rangle}_{=1}&= {\langle {lm} \rvert} (\mathbf{L}^2 - L_z^2 + \hbar L_z ) {\lvert {lm} \rangle} \\ &= \underbrace{\left\langle{{lm}} \vert {{lm}}\right\rangle}_{=1} \left(\hbar^2 l(l+1) - (\hbar m)^2 + \hbar^2 m \right) \\ &= \hbar^2 \left(l(l+1) - m^2 + m \right),\end{aligned} \hspace{\stretch{1}}(2.28)

and can normalize the functions ${\lvert {\psi} \rangle}$ as

\begin{aligned}L_{-} {\lvert {lm} \rangle} &= \hbar \left(l(l+1) - m^2 + m \right)^{1/2} {\lvert {l, m-1} \rangle}\end{aligned} \hspace{\stretch{1}}(2.29)
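As a check of my own on the coefficient in 2.29, the standard $l = 1$ matrix representation of the lowering operator (an assumption here, quoted rather than derived; basis ordered $m = +1, 0, -1$, with $\hbar = 1$) reproduces $\sqrt{l(l+1) - m^2 + m}$:

```python
import numpy as np

# l = 1 lowering operator, basis ordered m = +1, 0, -1 (hbar = 1)
s2 = np.sqrt(2.0)
Lm = np.array([[0, 0, 0], [s2, 0, 0], [0, s2, 0]], dtype=complex)

l = 1
for m, ket in [(1, [1, 0, 0]), (0, [0, 1, 0])]:
    psi = Lm @ np.array(ket, dtype=complex)
    # |L_- |l m>| should be hbar * sqrt(l(l+1) - m^2 + m)
    assert np.isclose(np.linalg.norm(psi), np.sqrt(l * (l + 1) - m**2 + m))

# The bottom rung is annihilated: L_- |1,-1> = 0
assert np.allclose(Lm @ np.array([0, 0, 1.0]), 0)
```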

Abstract notation side note:

\begin{aligned}\left\langle{{\theta,\phi}} \vert {{lm}}\right\rangle = Y_{lm}(\theta, \phi)\end{aligned} \hspace{\stretch{1}}(2.30)

## Generalizing orbital angular momentum.

To explain the results of the Stern-Gerlach experiment, assume that there is an intrinsic angular momentum $\mathbf{S}$ that has most of the same properties as $\mathbf{L}$. But $\mathbf{S}$ has no classical counterpart such as $\mathbf{r} \times \mathbf{p}$.

This experiment is the classic QM experiment because the silver atoms not only have orbital angular momentum, but also an additional observed intrinsic spin carried by the outermost electron. It turns out that if you combine relativity and QM, you can get out something that looks like the $\mathbf{S}$ operator. That marriage produces the Dirac electron theory.

We assume the commutation relations

\begin{aligned}\left[{S_x},{S_y}\right] &= i \hbar S_z \\ \left[{S_y},{S_z}\right] &= i \hbar S_x \\ \left[{S_z},{S_x}\right] &= i \hbar S_y\end{aligned} \hspace{\stretch{1}}(2.31)

Where we have the analogous eigenproperties

\begin{aligned}\mathbf{S}^2 {\lvert {sm} \rangle} &= \hbar^2 s(s+1) {\lvert {sm} \rangle} \\ S_z {\lvert {sm} \rangle} &= \hbar m {\lvert {sm} \rangle}\end{aligned} \hspace{\stretch{1}}(2.34)

with $s = 0, 1/2, 1, 3/2, ...$

Electrons and protons are examples of particles that have spin one half.

Note that there is no position representation of ${\lvert {sm} \rangle}$; we cannot project these states onto a position basis.

These quantum mechanical basics end up applying to quantum computing and cryptography as well, when the mathematics we are learning here for the Stern-Gerlach experiment is applied to photon polarization states.

(DRAWS Stern-Gerlach picture with spin up and down labeled ${\lvert {z+} \rangle}$, and ${\lvert {z-} \rangle}$ with the magnetic field oriented in along the $z$ axis.)

Silver atoms have $s = 1/2$ and $m= \pm 1/2$, where $m$ is the quantum number associated with the z-direction intrinsic angular momentum. The angular momentum that is being acted on in the Stern-Gerlach experiment is primarily due to the outermost electron.

\begin{aligned}S_z {\lvert {z+} \rangle} &= \frac{\hbar}{2} {\lvert {z+} \rangle} \\ S_z {\lvert {z-} \rangle} &= -\frac{\hbar}{2} {\lvert {z-} \rangle} \\ \mathbf{S}^2 {\lvert {z\pm} \rangle} &= \frac{1}{{2}} \left( \frac{1}{{2}} + 1 \right) \hbar^2 {\lvert {z\pm} \rangle}\end{aligned} \hspace{\stretch{1}}(2.36)

where

\begin{aligned}{\lvert {z+} \rangle} &= {\lvert { \frac{1}{{2}} \frac{1}{{2}} } \rangle} \\ {\lvert {z-} \rangle} &= {\lvert { \frac{1}{{2}} -\frac{1}{{2}} } \rangle}\end{aligned} \hspace{\stretch{1}}(2.39)

What about $S_x$? We can leave the detector in the $x,z$ plane, but rotate the magnet so that it lies in the $x$ direction.

We have the correspondence

\begin{aligned}S_z \leftrightarrow \frac{\hbar}{2} \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix},\end{aligned} \hspace{\stretch{1}}(2.41)

but this is perhaps more properly viewed as the matrix representation of the less specific form

\begin{aligned}S_z = \frac{\hbar}{2} \left({\lvert {z+} \rangle} {\langle {z+} \rvert}-{\lvert {z-} \rangle} {\langle {z-} \rvert}\right).\end{aligned} \hspace{\stretch{1}}(2.42)

Where the translation to the form of 2.41 is via the matrix elements

\begin{aligned}(S_z)_{11} &= {\langle {z+} \rvert} S_z {\lvert {z+} \rangle} \\ (S_z)_{12} &= {\langle {z+} \rvert} S_z {\lvert {z-} \rangle} \\ (S_z)_{21} &= {\langle {z-} \rvert} S_z {\lvert {z+} \rangle} \\ (S_z)_{22} &= {\langle {z-} \rvert} S_z {\lvert {z-} \rangle}.\end{aligned} \hspace{\stretch{1}}(2.43)

We can work out the same for $S_x$ using $S_{+}$ and $S_{-}$, or equivalently for $\sigma_x$ using $\sigma_{+}$ and $\sigma_{-}$, where

\begin{aligned}S_x &= \frac{\hbar}{2} \sigma_x \\ S_y &= \frac{\hbar}{2} \sigma_y \\ S_z &= \frac{\hbar}{2} \sigma_z\end{aligned} \hspace{\stretch{1}}(2.47)

The operators $\sigma_x, \sigma_y, \sigma_z$ are the Pauli operators, and avoid the pesky $\hbar/2$ factors.

We find

\begin{aligned}\sigma_x &= \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix} \\ \sigma_y &= \begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix} \\ \sigma_z &= \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix}\end{aligned} \hspace{\stretch{1}}(2.50)

And from ${\left\lvert{\sigma_x - \lambda I}\right\rvert} = (-\lambda)^2 -1$, we have eigenvalues $\lambda = \pm 1$ for the $\sigma_x$ operator.

The corresponding eigenkets in column matrix notation are found

\begin{aligned}\begin{bmatrix}\mp 1 & 1 \\ 1 & \mp 1 \end{bmatrix}\begin{bmatrix}a_1 \\ a_2 \end{bmatrix}&= 0 \\ \implies \mp a_1 + a_2 &= 0 \\ \implies a_2 &= \pm a_1\end{aligned}

Or

\begin{aligned}{\lvert {x\pm} \rangle} \propto \begin{bmatrix}a_1 \\ a_2 \end{bmatrix}=a_1\begin{bmatrix}1 \\ \pm 1 \end{bmatrix}\end{aligned}

which can be normalized as

\begin{aligned}{\lvert {x\pm} \rangle} = \frac{1}{{\sqrt{2}}}\begin{bmatrix}1 \\ \pm 1 \end{bmatrix}\end{aligned} \hspace{\stretch{1}}(2.53)

We see that this is different from

\begin{aligned}{\lvert {z+} \rangle} = \begin{bmatrix}1 \\ 0\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(2.54)

We will still end up with two spots, but there has been a projection of the spin in a different fashion. Does this mean the measurement will be different? There's still a lot more to learn before understanding exactly how to relate the spin operators to a real physical system.
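As a numerical aside of my own, the eigensystem of $\sigma_x$ worked out above, and the half-and-half $S_z$ measurement probabilities for ${\lvert {x+} \rangle}$, can be checked directly:

```python
import numpy as np

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)

# eigh returns eigenvalues in ascending order: -1, +1
evals, evecs = np.linalg.eigh(sigma_x)
assert np.allclose(evals, [-1, 1])

# Expected eigenkets (1, -1)/sqrt(2) and (1, 1)/sqrt(2)
x_minus = np.array([1, -1]) / np.sqrt(2)
x_plus = np.array([1, 1]) / np.sqrt(2)

# Compare up to an overall phase via |<a|b>| = 1
assert np.isclose(abs(x_minus.conj() @ evecs[:, 0]), 1)
assert np.isclose(abs(x_plus.conj() @ evecs[:, 1]), 1)

# A |x+> beam fed into a z-oriented splitter gives two equal spots:
# |<z+|x+>|^2 = 1/2
z_plus = np.array([1, 0])
assert np.isclose(abs(z_plus.conj() @ x_plus) ** 2, 0.5)
```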

## Oct 12, PHY356F lecture notes.

Posted by peeterjoot on October 12, 2010

Today as an experiment I tried taking live notes in latex in class (previously I’d not taken any notes since the lectures followed the text so closely).

[Click here for a PDF of this post with nicer formatting]

# Oct 12.

Review. What have we learned?

## Chapter 1.

Information about systems comes from vectors and operators. Express the vector ${\lvert {\phi} \rangle}$ describing the system in terms of the eigenvectors ${\lvert {a_n} \rangle}$, $n \in 1,2,3,\cdots$, of some operator $A$, as ${\lvert {\phi} \rangle} = \sum_n c_n {\lvert {a_n} \rangle}$. What are the coefficients $c_n$? Act on both sides by ${\langle {a_m} \rvert}$ to find

\begin{aligned}\left\langle{{a_m}} \vert {{\phi}}\right\rangle &= \sum_n c_n \underbrace{\left\langle{{a_m}} \vert {{a_n}}\right\rangle}_{\text{Kronecker delta}} \\ &= \sum_n c_n \delta_{mn} \\ &= c_m\end{aligned}

\begin{aligned}c_m = \left\langle{{a_m}} \vert {{\phi}}\right\rangle\end{aligned}

Analogy

\begin{aligned}\mathbf{v} = \sum_i v_i \mathbf{e}_i \end{aligned}

\begin{aligned}\mathbf{e}_1 \cdot \mathbf{v} = \sum_i v_i \mathbf{e}_1 \cdot \mathbf{e}_i = v_1\end{aligned}

Physical information comes from the probability of obtaining a measurement of the physical entity associated with the operator $A$. The probability of obtaining outcome $a_m$, an eigenvalue of $A$, is ${\left\lvert{c_m}\right\rvert}^2$.
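A small numerical sketch of my own of this coefficient extraction, using the eigenbasis of an arbitrary Hermitian matrix standing in for the operator $A$:

```python
import numpy as np

# Any Hermitian matrix plays the role of the observable A
A = np.array([[2.0, 1.0], [1.0, 2.0]])
evals, a = np.linalg.eigh(A)   # columns a[:, n] are the eigenkets |a_n>

phi = np.array([0.8, 0.6])     # a normalized state |phi>

# c_n = <a_n | phi>
c = a.conj().T @ phi

# The expansion sum_n c_n |a_n> recovers |phi>
assert np.allclose(a @ c, phi)

# Born rule: probabilities |c_n|^2 sum to 1 for a normalized state
assert np.isclose(np.sum(np.abs(c) ** 2), 1.0)
```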

## Chapter 2.

Deal with operators that have continuous eigenvalues and eigenvectors.

We now express

\begin{aligned}{\lvert {\phi} \rangle} = \int dk f(k) {\lvert {k} \rangle}\end{aligned}

Here the coefficients $f(k)$ are analogous to $c_n$.

Now if we project onto $k'$

\begin{aligned}\left\langle{{k'}} \vert {{\phi}}\right\rangle &= \int dk f(k) \underbrace{\left\langle{{k'}} \vert {{k}}\right\rangle}_{\text{Dirac delta}} \\ &= \int dk f(k) \delta(k' -k) \\ &= f(k') \end{aligned}

Unlike the discrete case, this is not a probability. Probability density for obtaining outcome $k'$ is ${\left\lvert{f(k')}\right\rvert}^2$.

Example 2.

\begin{aligned}{\lvert {\phi} \rangle} = \int dk f(k) {\lvert {k} \rangle}\end{aligned}

Now if we project both sides onto ${\langle {x} \rvert}$

\begin{aligned}\left\langle{{x}} \vert {{\phi}}\right\rangle &= \int dk f(k) \left\langle{{x}} \vert {{k}}\right\rangle \\ \end{aligned}

With $\left\langle{{x}} \vert {{k}}\right\rangle = u_k(x)$

\begin{aligned}\phi(x) &\equiv \left\langle{{x}} \vert {{\phi}}\right\rangle \\ &= \int dk f(k) u_k(x) \\ &= \int dk f(k) \frac{1}{{\sqrt{L}}} e^{ikx}\end{aligned}

This uses periodic boundary conditions for the normalization. The infinite (Dirac delta) normalization is also possible.

\begin{aligned}\phi(x) &= \frac{1}{{\sqrt{L}}} \int dk f(k) e^{ikx}\end{aligned}

Multiply both sides by $e^{-ik'x}/\sqrt{L}$ and integrate. This is analogous to multiplying ${\lvert {\phi} \rangle} = \int f(k) {\lvert {k} \rangle} dk$ by ${\langle {k'} \rvert}$. We get

\begin{aligned}\int \phi(x) \frac{1}{{\sqrt{L}}} e^{-ik'x} dx&= \frac{1}{{L}} \iint dk f(k) e^{i(k-k')x} dx \\ &= \int dk f(k) \Bigl( \frac{1}{{L}} \int e^{i(k-k')x} dx \Bigr) \\ &= \int dk f(k) \delta(k-k') \\ &= f(k')\end{aligned}

\begin{aligned}f(k') &=\int \phi(x) \frac{1}{{\sqrt{L}}} e^{-ik'x} dx\end{aligned}

We can talk about the state vector in terms of its position representation $\phi(x)$, or in momentum space via the Fourier transformation. These are equivalent, just expressed differently. The question of interpretation in terms of probabilities works out the same: either way we look at the probability density.
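Here is a discretized version of this projection, a sketch of my own: build $\phi(x)$ from a few $e^{ikx}/\sqrt{L}$ modes with known coefficients, then recover those coefficients with the integral above (the grid size and mode numbers are arbitrary choices):

```python
import numpy as np

L = 2 * np.pi
N = 512
x = np.linspace(0, L, N, endpoint=False)
dx = L / N

# A state built from three k = 2 pi n / L modes with known coefficients f(k)
ns = np.array([1, 3, -2])
f_true = np.array([0.5, 1.0 - 0.5j, 0.25j])
ks = 2 * np.pi * ns / L
phi = sum(f * np.exp(1j * k * x) / np.sqrt(L) for f, k in zip(f_true, ks))

# f(k') = integral of phi(x) e^{-i k' x} / sqrt(L) dx, as a Riemann sum
f_rec = np.array([np.sum(phi * np.exp(-1j * k * x)) * dx / np.sqrt(L)
                  for k in ks])

assert np.allclose(f_rec, f_true)
```

On a uniform periodic grid this sum is exact for modes well below the Nyquist limit, which is why the recovery is clean.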

The quantity

\begin{aligned}{\lvert {\phi} \rangle} = \int dk f(k) {\lvert {k} \rangle}\end{aligned}

is also called a wave packet state, since it involves a superposition of many states ${\lvert {k} \rangle}$. Example: see Fig 4.1 (Gaussian wave packet, with ${\left\lvert{\phi}\right\rvert}^2$ as the height). This wave packet is a snapshot of the wave function amplitude at one specific time instant. The evolution of this wave packet is governed by the Hamiltonian, which brings us to chapter 3.

## Chapter 3.

For

\begin{aligned}{\lvert {\phi} \rangle} = \int dk f(k) {\lvert {k} \rangle}\end{aligned}

How do we find ${\lvert {\phi(t)} \rangle}$, the time evolved state? Here we have the option of choosing which of the pictures (Schr\"{o}dinger, Heisenberg, interaction) we deal with. Since the Heisenberg picture deals with time evolved operators, and the interaction picture splits the evolution between operators and states, neither of these is required to answer this question. Consider the Schr\"{o}dinger picture, which gives

\begin{aligned}{\lvert {\phi(t)} \rangle} = \int dk f(k) {\lvert {k} \rangle} e^{-i E_k t/\hbar}\end{aligned}

where $E_k$ is the eigenvalue of the Hamiltonian operator $H$.

STRONG SEEMING HINT: If looking for additional problems and homework, consider in detail the time evolution of the Gaussian wave packet state.
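A toy numerical version of this evolution (my own sketch, with an arbitrary $2 \times 2$ Hermitian matrix standing in for $H$): attach the $e^{-i E_k t/\hbar}$ phases in the energy eigenbasis and confirm that the norm, the total probability, is preserved.

```python
import numpy as np

hbar = 1.0

# A toy Hermitian Hamiltonian and its eigensystem
H = np.array([[1.0, 0.3], [0.3, 2.0]])
E, V = np.linalg.eigh(H)          # columns of V are energy eigenkets

phi0 = np.array([0.6, 0.8])       # initial normalized state
c = V.conj().T @ phi0             # expansion coefficients, the f(k) analog

t = 2.5
phi_t = V @ (np.exp(-1j * E * t / hbar) * c)

# Unitary evolution preserves the norm
assert np.isclose(np.linalg.norm(phi_t), 1.0)

# At t = 0 we recover the initial state
assert np.allclose(V @ c, phi0)
```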

## Chapter 4.

For three dimensions with $V(x,y,z) = 0$

\begin{aligned}H &= \frac{1}{{2m}} \mathbf{p}^2 \\ \mathbf{p} &= \sum_i p_i \mathbf{e}_i \\ \end{aligned}

In the position representation, where

\begin{aligned}p_i &= -i \hbar \frac{d}{dx_i}\end{aligned}

the Schr\"{o}dinger equation is

\begin{aligned}H u(x,y,z) &= E u(x,y,z) \\ H &= -\frac{\hbar^2}{2m} \boldsymbol{\nabla}^2 \\ &= -\frac{\hbar^2}{2m} \left( \frac{\partial^2}{\partial {x}^2}+\frac{\partial^2}{\partial {y}^2}+\frac{\partial^2}{\partial {z}^2}\right) \end{aligned}

Separation of variables assumes it is possible to let

\begin{aligned}u(x,y,z) = X(x) Y(y) Z(z)\end{aligned}

(these capital letters are functions, not operators).

\begin{aligned}-\frac{\hbar^2}{2m} \left( YZ \frac{\partial^2 X}{\partial {x}^2}+ XZ \frac{\partial^2 Y}{\partial {y}^2}+ XY \frac{\partial^2 Z}{\partial {z}^2}\right)&= E X Y Z\end{aligned}

Dividing as usual by $XYZ$ we have

\begin{aligned}-\frac{\hbar^2}{2m} \left( \frac{1}{{X}} \frac{\partial^2 X}{\partial {x}^2}+ \frac{1}{{Y}} \frac{\partial^2 Y}{\partial {y}^2}+ \frac{1}{{Z}} \frac{\partial^2 Z}{\partial {z}^2} \right)&= E \end{aligned}

The curious thing is that these three derivative terms sum to an energy $E$ that is independent of $x,y,z$, so each of these terms must be separately constant. We can thus separate into three individual equations

\begin{aligned}-\frac{\hbar^2}{2m} \frac{1}{{X}} \frac{\partial^2 X}{\partial {x}^2} &= E_1 \\ -\frac{\hbar^2}{2m} \frac{1}{{Y}} \frac{\partial^2 Y}{\partial {y}^2} &= E_2 \\ -\frac{\hbar^2}{2m} \frac{1}{{Z}} \frac{\partial^2 Z}{\partial {z}^2} &= E_3\end{aligned}

or

\begin{aligned}\frac{\partial^2 X}{\partial {x}^2} &= \left( - \frac{2m E_1}{\hbar^2} \right) X \\ \frac{\partial^2 Y}{\partial {y}^2} &= \left( - \frac{2m E_2}{\hbar^2} \right) Y \\ \frac{\partial^2 Z}{\partial {z}^2} &= \left( - \frac{2m E_3}{\hbar^2} \right) Z \end{aligned}

We have then, for each coordinate, solutions like

\begin{aligned}X(x) = C_1 e^{i k_1 x}\end{aligned}

with

\begin{aligned}E_1 &= \frac{\hbar^2 k_1^2 }{2m} = \frac{p_1^2}{2m} \\ E_2 &= \frac{\hbar^2 k_2^2 }{2m} = \frac{p_2^2}{2m} \\ E_3 &= \frac{\hbar^2 k_3^2 }{2m} = \frac{p_3^2}{2m} \end{aligned}

We are free to use any sort of normalization procedure we wish (periodic boundary conditions, infinite Dirac, …)

## Angular momentum.

HOMEWORK: go through the steps to understand how to formulate $\boldsymbol{\nabla}^2$ in spherical polar coordinates. This is a lot of work, but is good practice and background for dealing with the Hydrogen atom, something with spherical symmetry that is most naturally analyzed in the spherical polar coordinates.

In spherical coordinates (We won’t go through this here, but it is good practice) with

\begin{aligned}x &= r \sin\theta \cos\phi \\ y &= r \sin\theta \sin\phi \\ z &= r \cos\theta\end{aligned}

we have with $u = u(r,\theta, \phi)$

\begin{aligned}-\frac{\hbar^2}{2m} \left( \frac{1}{{r}} \partial_{rr} (r u) + \frac{1}{{r^2 \sin\theta}} \partial_\theta (\sin\theta \partial_\theta u) + \frac{1}{{r^2 \sin^2\theta}} \partial_{\phi\phi} u \right)&= E u\end{aligned}

We see the start of a separation of variables attack with $u = R(r) Y(\theta, \phi)$. We end up with

\begin{aligned}\frac{r}{R} (r R)'' + \frac{1}{{Y \sin\theta}} \partial_\theta (\sin\theta \partial_\theta Y) + \frac{1}{{Y \sin^2\theta}} \partial_{\phi\phi} Y &= -\frac{2 m E}{\hbar^2} r^2,\end{aligned}

after multiplying through by $-2 m r^2/(\hbar^2 R Y)$. Setting the radial portion equal to a separation constant $\lambda$, so that the angular portion equals $-\lambda$, we have

\begin{aligned}r (r R)'' + \left( \frac{2m E}{\hbar^2} r^2 - \lambda \right) R &= 0\end{aligned}

\begin{aligned}\frac{1}{{Y \sin\theta}} \partial_\theta (\sin\theta \partial_\theta Y) + \frac{1}{{Y \sin^2\theta}} \partial_{\phi\phi} Y &= -\lambda\end{aligned}

Application of separation of variables again, with $Y = P(\theta) Q(\phi)$ gives us

\begin{aligned}\frac{1}{{P \sin\theta}} \partial_\theta (\sin\theta \partial_\theta P) + \frac{1}{{Q \sin^2\theta}} \partial_{\phi\phi} Q &= -\lambda \end{aligned}

\begin{aligned}\frac{\sin\theta}{P } \partial_\theta (\sin\theta \partial_\theta P) +\lambda \sin^2\theta+ \frac{1}{{Q }} \partial_{\phi\phi} Q &= 0\end{aligned}

\begin{aligned}\frac{\sin\theta}{P } \partial_\theta (\sin\theta \partial_\theta P) + \lambda \sin^2\theta &= \mu \\ \frac{1}{{Q }} \partial_{\phi\phi} Q &= -\mu\end{aligned}

or

\begin{aligned}\frac{1}{P \sin\theta} \partial_\theta (\sin\theta \partial_\theta P) +\lambda -\frac{\mu}{\sin^2\theta} &= 0\end{aligned} \hspace{\stretch{1}}(1.1)

\begin{aligned}\partial_{\phi\phi} Q &= -\mu Q\end{aligned} \hspace{\stretch{1}}(1.2)

The equation for $P$ can be solved using the Legendre functions $P_l^m(\cos\theta)$, where $\lambda = l(l+1)$ and $l$ is an integer.

Replacing $\mu$ with $m^2$, where $m$ is an integer

\begin{aligned}\frac{d^2 Q}{d\phi^2} &= -m^2 Q\end{aligned}

Imposing a periodic boundary condition $Q(\phi) = Q(\phi + 2\pi)$, where ($m = 0, \pm 1, \pm 2, \cdots$) we have

\begin{aligned}Q &= \frac{1}{{\sqrt{2\pi}}} e^{im\phi}\end{aligned}

This gives the overall solution $u(r,\theta,\phi) = R(r) Y(\theta, \phi)$ for a free particle. The functions $Y(\theta, \phi)$ are

\begin{aligned}Y_{lm}(\theta, \phi) &= N \left( \frac{1}{{\sqrt{2\pi}}} e^{im\phi} \right) \underbrace{ P_l^m(\cos\theta) }_{ -l \le m \le l }\end{aligned}

where $N$ is a normalization constant, and $m = 0, \pm 1, \pm 2, \cdots$. $Y_{lm}$ is an eigenstate of both the $\mathbf{L}^2$ operator and $L_z$ (two for the price of one). There's no specific reason for the direction $z$; it is just the direction picked by convention.
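As a spot check of my own on the $\mathbf{L}^2$ eigenvalue claim, finite differences applied to the unnormalized $Y_{11} \propto \sin\theta\, e^{i\phi}$ show the angular operator returning $-l(l+1) = -2$ times the function:

```python
import numpy as np

def Y11(theta, phi):
    # Unnormalized Y_{1,1}, proportional to sin(theta) e^{i phi}
    return np.sin(theta) * np.exp(1j * phi)

def angular_laplacian(f, theta, phi, h=1e-4):
    # (1/sin t) d/dt (sin t df/dt) + (1/sin^2 t) d^2f/dphi^2, by
    # central differences at an interior point (away from the poles)
    def g(t):  # sin(t) * df/dtheta evaluated at (t, phi)
        return np.sin(t) * (f(t + h, phi) - f(t - h, phi)) / (2 * h)
    term1 = (g(theta + h) - g(theta - h)) / (2 * h) / np.sin(theta)
    term2 = ((f(theta, phi + h) - 2 * f(theta, phi) + f(theta, phi - h))
             / h**2 / np.sin(theta)**2)
    return term1 + term2

theta0, phi0 = 1.0, 0.7
ratio = angular_laplacian(Y11, theta0, phi0) / Y11(theta0, phi0)
assert np.isclose(ratio, -2.0, atol=1e-4)   # -l(l+1) with l = 1
```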

Angular momentum is given by

\begin{aligned}\mathbf{L} = \mathbf{r} \times \mathbf{p}\end{aligned}

where

\begin{aligned}\mathbf{r} = x \hat{\mathbf{x}} + y\hat{\mathbf{y}} + z\hat{\mathbf{z}}\end{aligned}

and

\begin{aligned}\mathbf{p} = p_x \hat{\mathbf{x}} + p_y\hat{\mathbf{y}} + p_z\hat{\mathbf{z}}\end{aligned}

The important thing to remember is that the aim of following all the math is to show that

\begin{aligned}\mathbf{L}^2 Y_{lm} = \hbar^2 l (l+1) Y_{lm}\end{aligned}

and simultaneously

\begin{aligned}L_z Y_{lm} = \hbar m Y_{lm}\end{aligned}

Part of the solution involves working with $\left[{L_z},{L_{+}}\right]$, and $\left[{L_z},{L_{-}}\right]$, where

\begin{aligned}L_{+} &= L_x + i L_y \\ L_{-} &= L_x - i L_y\end{aligned}

An exercise (not in the book) is to evaluate

\begin{aligned}\left[{L_z},{L_{+}}\right] &= L_z L_x + i L_z L_y - L_x L_z - i L_y L_z \end{aligned} \hspace{\stretch{1}}(1.3)

where

\begin{aligned}\left[{L_x},{L_y}\right] &= i \hbar L_z \\ \left[{L_y},{L_z}\right] &= i \hbar L_x \\ \left[{L_z},{L_x}\right] &= i \hbar L_y\end{aligned} \hspace{\stretch{1}}(1.4)

Substitution back in 1.3 we have

\begin{aligned}\left[{L_z},{L_{+}}\right] &=\left[{L_z},{L_x}\right] + i \left[{L_z},{L_y}\right] \\ &=i \hbar ( L_y - i L_x ) \\ &=\hbar ( i L_y + L_x ) \\ &=\hbar L_{+}\end{aligned}

## Notes and problems for Desai chapter IV.

Posted by peeterjoot on October 12, 2010

[Click here for a PDF of this post with nicer formatting]

# Notes.

Chapter IV notes and problems for [1].

There’s a lot of magic related to the spherical Harmonics in this chapter, with identities pulled out of the Author’s butt. It would be nice to work through that, but need a better reference to work from (or skip ahead to chapter 26 where some of this is apparently derived).

Other stuff pending background derivation and verification are

\begin{itemize}
\item Antisymmetric tensor summation identity.

\begin{aligned}\sum_i \epsilon_{ijk} \epsilon_{iab} = \delta_{ja} \delta_{kb} - \delta_{jb}\delta_{ka}\end{aligned} \hspace{\stretch{1}}(1.1)

This is obviously the coordinate equivalent of the dot product of two bivectors

\begin{aligned}(\mathbf{e}_j \wedge \mathbf{e}_k) \cdot (\mathbf{e}_a \wedge \mathbf{e}_b) &=( (\mathbf{e}_j \wedge \mathbf{e}_k) \cdot \mathbf{e}_a ) \cdot \mathbf{e}_b =\delta_{ka}\delta_{jb} - \delta_{ja}\delta_{kb}\end{aligned} \hspace{\stretch{1}}(1.2)

We can prove 1.1 by expanding the LHS of 1.2 in coordinates

\begin{aligned}(\mathbf{e}_j \wedge \mathbf{e}_k) \cdot (\mathbf{e}_a \wedge \mathbf{e}_b)&= \sum_{ie} \left\langle{{\epsilon_{ijk} \mathbf{e}_j \mathbf{e}_k \epsilon_{eab} \mathbf{e}_a \mathbf{e}_b}}\right\rangle \\ &=\sum_{ie}\epsilon_{ijk} \epsilon_{eab}\left\langle{{(\mathbf{e}_i \mathbf{e}_i) \mathbf{e}_j \mathbf{e}_k (\mathbf{e}_e \mathbf{e}_e) \mathbf{e}_a \mathbf{e}_b}}\right\rangle \\ &=\sum_{ie}\epsilon_{ijk} \epsilon_{eab}\left\langle{{\mathbf{e}_i \mathbf{e}_e I^2}}\right\rangle \\ &=-\sum_{ie} \epsilon_{ijk} \epsilon_{eab} \delta_{ie} \\ &=-\sum_i\epsilon_{ijk} \epsilon_{iab}\qquad\square\end{aligned}

\item Question on raising and lowering arguments.

How equation (4.240) was arrived at is not clear. In (4.239) he writes

\begin{aligned}\int_0^{2\pi} \int_0^{\pi} d\theta d\phi(L_{-} Y_{lm})^\dagger L_{-} Y_{lm} \sin\theta\end{aligned}

Shouldn't that Hermitian conjugation be just complex conjugation? If so, one would have

\begin{aligned}\int_0^{2\pi} \int_0^{\pi} d\theta d\phi L_{-}^{*} Y_{lm}^{*}L_{-} Y_{lm} \sin\theta\end{aligned}

How does he end up with the $L_{-}$ and the $Y_{lm}^{*}$ interchanged? What justifies this commutation?

A much clearer discussion of this can be found in The operators $L_{\pm}$, where Dirac notation is used for the normalization discussion.

\item Another question on raising and lowering arguments.

The reasoning leading to (4.238) isn’t clear to me. I fail to see how the commutation of $L_{-}$ with $\mathbf{L}^2$ implies this.

\end{itemize}
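The antisymmetric tensor summation identity 1.1 above is also easy to verify by brute force. A quick numeric check (my own sketch, not from the text) in Python:

```python
import numpy as np

# Levi-Civita symbol epsilon_{ijk} as a 3x3x3 array
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0   # even permutations
    eps[i, k, j] = -1.0  # odd permutations

delta = np.eye(3)

# LHS: sum_i epsilon_{ijk} epsilon_{iab}
lhs = np.einsum('ijk,iab->jkab', eps, eps)
# RHS: delta_{ja} delta_{kb} - delta_{jb} delta_{ka}
rhs = np.einsum('ja,kb->jkab', delta, delta) - np.einsum('jb,ka->jkab', delta, delta)

identity_holds = bool(np.allclose(lhs, rhs))
```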

# Problems

## Problem 1.

### Statement.

Write down the free particle Schr\”{o}dinger equation for two dimensions in (i) Cartesian and (ii) polar coordinates. Obtain the corresponding wavefunction.

### Cartesian case.

For the Cartesian coordinates case we have

\begin{aligned}H = -\frac{\hbar^2}{2m} (\partial_{xx} + \partial_{yy}) = i \hbar \partial_t\end{aligned} \hspace{\stretch{1}}(2.3)

Application of separation of variables with $\Psi = XYT$ gives

\begin{aligned}-\frac{\hbar^2}{2m} \left( \frac{X''}{X} +\frac{Y''}{Y} \right) = i \hbar \frac{T'}{T} = E .\end{aligned} \hspace{\stretch{1}}(2.4)

Immediately, we have the time dependence

\begin{aligned}T \propto e^{-i E t/\hbar},\end{aligned} \hspace{\stretch{1}}(2.5)

with the PDE reduced to

\begin{aligned}\frac{X''}{X} +\frac{Y''}{Y} = - \frac{2m E}{\hbar^2}.\end{aligned} \hspace{\stretch{1}}(2.6)

Introducing separate independent constants

\begin{aligned}\frac{X''}{X} &= a^2 \\ \frac{Y''}{Y} &= b^2 \end{aligned} \hspace{\stretch{1}}(2.7)

provides the pre-normalized wave function and the constraints on the constants

\begin{aligned}\Psi &= C e^{ax}e^{by}e^{-iE t/\hbar} \\ a^2 + b^2 &= -\frac{2 m E}{\hbar^2}.\end{aligned} \hspace{\stretch{1}}(2.9)
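As a sanity check, a symbolic algebra package confirms that this product form satisfies the Schrödinger equation whenever the constraint on $a$ and $b$ holds. A sympy sketch (mine, not from the text):

```python
import sympy as sp

x, y, t, a, b = sp.symbols('x y t a b')
m, hbar = sp.symbols('m hbar', positive=True)

# E fixed by the separation constraint a^2 + b^2 = -2 m E / hbar^2
E = -(a**2 + b**2) * hbar**2 / (2 * m)

# the pre-normalized separated solution
Psi = sp.exp(a * x) * sp.exp(b * y) * sp.exp(-sp.I * E * t / hbar)

# residual of H Psi = i hbar dPsi/dt for the free particle Hamiltonian
residual = (-hbar**2 / (2 * m)) * (sp.diff(Psi, x, 2) + sp.diff(Psi, y, 2)) \
    - sp.I * hbar * sp.diff(Psi, t)

schrodinger_satisfied = sp.simplify(residual) == 0
```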

### Rectangular normalization.

We are now ready to apply normalization constraints. One possibility is a rectangular periodicity requirement.

\begin{aligned}e^{ax} &= e^{a(x + \lambda_x)} \\ e^{by} &= e^{b(y + \lambda_y)} ,\end{aligned} \hspace{\stretch{1}}(2.11)

or

\begin{aligned}a\lambda_x &= 2 \pi i m \\ b\lambda_y &= 2 \pi i n.\end{aligned} \hspace{\stretch{1}}(2.13)

This provides a more explicit form for the energy expression

\begin{aligned}E_{mn} &= \frac{1}{{2m}} 4 \pi^2 \hbar^2 \left( \frac{m^2}{{\lambda_x}^2}+\frac{n^2}{{\lambda_y}^2}\right).\end{aligned} \hspace{\stretch{1}}(2.15)

We can also add in the area normalization using

\begin{aligned}\left\langle{{\psi}} \vert {{\phi}}\right\rangle &= \int_{x=0}^{\lambda_x} dx\int_{y=0}^{\lambda_y} dy \psi^{*}(x,y) \phi(x,y).\end{aligned} \hspace{\stretch{1}}(2.16)

Our eigenfunctions are now completely specified

\begin{aligned}u_{mn}(x,y,t) &= \frac{1}{{\sqrt{\lambda_x \lambda_y}}}e^{2 \pi i m x/\lambda_x}e^{2 \pi i n y/\lambda_y}e^{-iE_{mn} t/\hbar}.\end{aligned} \hspace{\stretch{1}}(2.17)

The interesting thing about this solution is that we can make arbitrary linear combinations

\begin{aligned}f(x,y) = \sum_{m n} a_{mn} u_{mn}\end{aligned} \hspace{\stretch{1}}(2.18)

and then “solve” for $a_{mn}$, for an arbitrary $f(x,y)$ by taking inner products

\begin{aligned}a_{mn} = \left\langle{{u_{mn}}} \vert {{f}}\right\rangle =\int_{x=0}^{\lambda_x} dx \int_{y=0}^{\lambda_y} dy f(x,y) u_{mn}^{*}(x,y).\end{aligned} \hspace{\stretch{1}}(2.19)

This gives the appearance that any function $f(x,y)$ is a solution, but the equality of 2.18 only applies for functions in the span of this function vector space. The procedure works for arbitrary square integrable functions $f(x,y)$, but the equality really means that the RHS will be the periodic extension of $f(x,y)$.
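This coefficient extraction procedure is easy to demonstrate numerically. Here is a sketch (mine, with an arbitrarily chosen periodic test function and $\lambda_x = \lambda_y$ assumed) that computes the $a_{mn}$ by discrete inner products and reconstructs $f$:

```python
import numpy as np

lam = 2 * np.pi            # lambda_x = lambda_y, an assumed box period
N = 64                     # grid points per axis
xs = np.linspace(0, lam, N, endpoint=False)
X, Y = np.meshgrid(xs, xs, indexing='ij')

def u(mm, nn):
    # spatial part of u_mn, normalized over the lam x lam cell
    return np.exp(2j * np.pi * (mm * X + nn * Y) / lam) / lam

# a smooth periodic test function (arbitrary choice)
f = np.cos(2 * np.pi * X / lam) * np.sin(4 * np.pi * Y / lam)

# a_mn = <u_mn | f>, approximated by a Riemann sum (exact here for band limited f)
dA = (lam / N) ** 2
modes = range(-3, 4)
coeffs = {(mm, nn): np.sum(np.conj(u(mm, nn)) * f) * dA
          for mm in modes for nn in modes}

# reconstruct f from the coefficients
f_rec = sum(c * u(mm, nn) for (mm, nn), c in coeffs.items())
reconstruction_ok = bool(np.allclose(f_rec, f))
```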

### Infinite space normalization.

An alternate normalization is possible by using the Fourier transform normalization, in which we substitute

\begin{aligned}\frac{2 \pi m }{\lambda_x} &= k_x \\ \frac{2 \pi n }{\lambda_y} &= k_y \end{aligned} \hspace{\stretch{1}}(2.20)

Our inner product is now

\begin{aligned}\left\langle{{\psi}} \vert {{\phi}}\right\rangle &= \int_{-\infty}^{\infty} dx\int_{-\infty}^{\infty} dy \psi^{*}(x,y) \phi(x,y).\end{aligned} \hspace{\stretch{1}}(2.22)

And the corresponding normalized wavefunction and associated energy constant $E$ are

\begin{aligned}u_{\mathbf{k}}(x,y,t) &= \frac{1}{{2\pi}}e^{i k_x x}e^{i k_y y}e^{-iE t/\hbar} = \frac{1}{{2\pi}}e^{i \mathbf{k} \cdot \mathbf{x}}e^{-iE t/\hbar} \\ E &= \frac{\hbar^2 \mathbf{k}^2 }{2m}\end{aligned} \hspace{\stretch{1}}(2.23)

Now via this Fourier inner product we are able to construct a solution from any square integrable function. Again, this will not be
an exact equality since the Fourier transform has the effect of averaging across discontinuities.

### Polar case.

In polar coordinates our gradient is

\begin{aligned}\boldsymbol{\nabla} &= \hat{\mathbf{r}} \partial_r + \frac{\hat{\boldsymbol{\theta}}}{r} \partial_\theta.\end{aligned} \hspace{\stretch{1}}(2.25)

with

\begin{aligned}\hat{\mathbf{r}} &= \mathbf{e}_1 e^{\mathbf{e}_1 \mathbf{e}_2 \theta} \\ \hat{\boldsymbol{\theta}} &= \mathbf{e}_2 e^{\mathbf{e}_1 \mathbf{e}_2 \theta} .\end{aligned} \hspace{\stretch{1}}(2.26)

Squaring the gradient for the Laplacian we’ll need the partials, which are

\begin{aligned}\partial_r \hat{\mathbf{r}} &= 0 \\ \partial_r \hat{\boldsymbol{\theta}} &= 0 \\ \partial_\theta \hat{\mathbf{r}} &= \hat{\boldsymbol{\theta}} \\ \partial_\theta \hat{\boldsymbol{\theta}} &= -\hat{\mathbf{r}}.\end{aligned}

The Laplacian is therefore

\begin{aligned}\boldsymbol{\nabla}^2 &= \left(\hat{\mathbf{r}} \partial_r + \frac{\hat{\boldsymbol{\theta}}}{r} \partial_\theta\right) \cdot\left(\hat{\mathbf{r}} \partial_r + \frac{\hat{\boldsymbol{\theta}}}{r} \partial_\theta\right) \\ &= \partial_{rr} + \frac{\hat{\boldsymbol{\theta}}}{r} \cdot \partial_\theta \left(\hat{\mathbf{r}} \partial_r\right) + \frac{\hat{\boldsymbol{\theta}}}{r} \cdot \partial_\theta \left(\frac{\hat{\boldsymbol{\theta}}}{r} \partial_\theta\right) \\ &= \partial_{rr} + \frac{\hat{\boldsymbol{\theta}}}{r} \cdot (\partial_\theta \hat{\mathbf{r}}) \partial_r + \frac{\hat{\boldsymbol{\theta}}}{r} \cdot \frac{\hat{\boldsymbol{\theta}}}{r} \partial_{\theta\theta} + \frac{\hat{\boldsymbol{\theta}}}{r} \cdot (\partial_\theta \hat{\boldsymbol{\theta}}) \frac{1}{{r}} \partial_\theta .\end{aligned}

Evaluating the derivatives we have

\begin{aligned}\boldsymbol{\nabla}^2 = \partial_{rr} + \frac{1}{{r}} \partial_r + \frac{1}{r^2} \partial_{\theta\theta},\end{aligned} \hspace{\stretch{1}}(2.28)

and we are now prepared to move on to the solution of the Hamiltonian $H = -(\hbar^2/2m) \boldsymbol{\nabla}^2$. With separation of variables, again using $\Psi = R(r) \Theta(\theta) T(t)$, we have

\begin{aligned}-\frac{\hbar^2}{2m} \left( \frac{R''}{R} + \frac{R'}{rR} + \frac{1}{{r^2}} \frac{\Theta''}{\Theta} \right) = i \hbar \frac{T'}{T} = E.\end{aligned} \hspace{\stretch{1}}(2.29)

Rearranging to separate the $\Theta$ term we have

\begin{aligned}\frac{r^2 R''}{R} + \frac{r R'}{R} + \frac{2 m E}{\hbar^2} r^2 = -\frac{\Theta''}{\Theta} = \lambda^2.\end{aligned} \hspace{\stretch{1}}(2.30)

The angular solutions are given by

\begin{aligned}\Theta = \frac{1}{{\sqrt{2\pi}}} e^{i \lambda \theta}\end{aligned} \hspace{\stretch{1}}(2.31)

where the normalization is given by

\begin{aligned}\left\langle{{\psi}} \vert {{\phi}}\right\rangle &= \int_{0}^{2 \pi} d\theta \psi^{*}(\theta) \phi(\theta).\end{aligned} \hspace{\stretch{1}}(2.32)

And the radial part by the solution of the ODE

\begin{aligned}r^2 R'' + r R' + \left( \frac{2 m E}{\hbar^2} r^2 - \lambda^2 \right) R = 0\end{aligned} \hspace{\stretch{1}}(2.33)
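Although not asked for in the problem, observe that with $k^2 = 2 m E/\hbar^2$ this radial equation is Bessel's equation in $kr$, with solutions $J_\lambda(k r)$. That claim is easy to spot check numerically (a sketch of mine using scipy; the sample values of $k$ and $\lambda$ are arbitrary):

```python
import numpy as np
from scipy.special import jv, jvp

k, lam = 1.7, 2.0                  # arbitrary sample wavenumber and separation constant
r = np.linspace(0.5, 10.0, 200)

R = jv(lam, k * r)                 # candidate radial solution J_lambda(k r)
dR = k * jvp(lam, k * r, 1)        # dR/dr by the chain rule
d2R = k**2 * jvp(lam, k * r, 2)    # d^2 R/dr^2

# residual of r^2 R'' + r R' + (k^2 r^2 - lambda^2) R = 0
residual = r**2 * d2R + r * dR + (k**2 * r**2 - lam**2) * R
radial_eq_satisfied = bool(np.max(np.abs(residual)) < 1e-10)
```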

## Problem 2.

### Statement.

Use the orthogonality property of $P_l(\cos\theta)$

\begin{aligned}\int_{-1}^1 dx P_l(x) P_{l'}(x) = \frac{2}{2l+1} \delta_{l l'},\end{aligned} \hspace{\stretch{1}}(2.34)

confirm that at least the first two terms of (4.171)

\begin{aligned}e^{i k r \cos\theta} = \sum_{l=0}^\infty (2l + 1) i^l j_l(kr) P_l(\cos\theta)\end{aligned} \hspace{\stretch{1}}(2.35)

are correct.

### Solution.

Taking the inner product using the integral of 2.34 we have

\begin{aligned}\int_{-1}^1 dx e^{i k r x} P_{l'}(x) = 2 i^{l'} j_{l'}(kr) \end{aligned} \hspace{\stretch{1}}(2.36)

To confirm the first two terms we need

\begin{aligned}P_0(x) &= 1 \\ P_1(x) &= x \\ j_0(\rho) &= \frac{\sin\rho}{\rho} \\ j_1(\rho) &= \frac{\sin\rho}{\rho^2} - \frac{\cos\rho}{\rho}.\end{aligned} \hspace{\stretch{1}}(2.37)

On the LHS for $l'=0$ we have

\begin{aligned}\int_{-1}^1 dx e^{i k r x} = 2 \frac{\sin{kr}}{kr}\end{aligned} \hspace{\stretch{1}}(2.41)

On the LHS for $l'=1$ note that

\begin{aligned}\int dx x e^{i k r x} &= \int dx x \frac{d}{dx} \frac{e^{i k r x}}{ikr} \\ &= x \frac{e^{i k r x}}{ikr} - \frac{e^{i k r x}}{(ikr)^2}.\end{aligned}

So, integration in $[-1,1]$ gives us

\begin{aligned}\int_{-1}^1 dx\,x\,e^{i k r x} = -2i \frac{\cos{kr}}{kr} + 2i \frac{1}{{(kr)^2}} \sin{kr}.\end{aligned} \hspace{\stretch{1}}(2.42)

Now compare to the RHS for $l'=0$, which is

\begin{aligned}2 j_0(kr) = 2 \frac{\sin{kr}}{kr},\end{aligned} \hspace{\stretch{1}}(2.43)

which matches 2.41. For $l'=1$ we have

\begin{aligned}2 i j_1(kr) = 2i \frac{1}{{kr}} \left( \frac{\sin{kr}}{kr} - \cos{kr} \right),\end{aligned} \hspace{\stretch{1}}(2.44)

which in turn matches 2.42, completing the exercise.
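These spot checks can also be pushed to higher $l$ numerically. A sketch (my own check of 2.36, with an arbitrary sample value of $kr$):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import spherical_jn, eval_legendre

kr = 2.3   # an arbitrary sample value of k r

def lhs(l):
    # integral of exp(i k r x) P_l(x) over [-1, 1], real and imaginary parts separately
    re, _ = quad(lambda x: np.cos(kr * x) * eval_legendre(l, x), -1, 1)
    im, _ = quad(lambda x: np.sin(kr * x) * eval_legendre(l, x), -1, 1)
    return re + 1j * im

# compare against 2 i^l j_l(kr) for the first few l
expansion_ok = bool(all(
    np.isclose(lhs(l), 2 * 1j**l * spherical_jn(l, kr))
    for l in range(4)
))
```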

## Problem 3.

### Statement.

Obtain the commutation relations $\left[{L_i},{L_j}\right]$ by calculating the vector $\mathbf{L} \times \mathbf{L}$ using the definition $\mathbf{L} = \mathbf{r} \times \mathbf{p}$ directly instead of introducing a differential operator.

### Solution.

Expressing the product $\mathbf{L} \times \mathbf{L}$ in determinant form sheds some light on this question. That is

\begin{aligned}\begin{vmatrix} \mathbf{e}_1 & \mathbf{e}_2 & \mathbf{e}_3 \\ L_1 & L_2 & L_3 \\ L_1 & L_2 & L_3\end{vmatrix}&= \mathbf{e}_1 \left[{L_2},{L_3}\right] +\mathbf{e}_2 \left[{L_3},{L_1}\right] +\mathbf{e}_3 \left[{L_1},{L_2}\right]= \frac{1}{{2}} \mathbf{e}_i \epsilon_{ijk} \left[{L_j},{L_k}\right]\end{aligned} \hspace{\stretch{1}}(2.45)

We see that evaluating this cross product in turn requires evaluation of the set of commutators. We can do that with the canonical commutator relationships directly using $L_i = \epsilon_{ijk} r_j p_k$ like so

\begin{aligned}\left[{L_i},{L_j}\right]&=\epsilon_{imn} r_m p_n \epsilon_{jab} r_a p_b- \epsilon_{jab} r_a p_b \epsilon_{imn} r_m p_n \\ &=\epsilon_{imn} \epsilon_{jab} r_m (p_n r_a) p_b- \epsilon_{jab} \epsilon_{imn} r_a (p_b r_m) p_n \\ &=\epsilon_{imn} \epsilon_{jab} r_m (r_a p_n -i \hbar \delta_{an}) p_b- \epsilon_{jab} \epsilon_{imn} r_a (r_m p_b - i \hbar \delta_{mb}) p_n \\ &=\epsilon_{imn} \epsilon_{jab} (r_m r_a p_n p_b - r_a r_m p_b p_n )- i \hbar ( \epsilon_{imn} \epsilon_{jnb} r_m p_b - \epsilon_{jam} \epsilon_{imn} r_a p_n ).\end{aligned}

The first two terms cancel, and we can employ (4.179) to eliminate the antisymmetric tensors from the last two terms

\begin{aligned}\left[{L_i},{L_j}\right]&=i \hbar ( \epsilon_{nim} \epsilon_{njb} r_m p_b - \epsilon_{mja} \epsilon_{min} r_a p_n ) \\ &=i \hbar ( (\delta_{ij} \delta_{mb} -\delta_{ib} \delta_{mj}) r_m p_b - (\delta_{ji} \delta_{an} -\delta_{jn} \delta_{ai}) r_a p_n ) \\ &=i \hbar (\delta_{ij} \delta_{mb} r_m p_b - \delta_{ji} \delta_{an} r_a p_n - \delta_{ib} \delta_{mj} r_m p_b + \delta_{jn} \delta_{ai} r_a p_n ) \\ &=i \hbar (\delta_{ij} r_m p_m- \delta_{ji} r_a p_a- r_j p_i+ r_i p_j ) \\ &=i \hbar ( r_i p_j - r_j p_i ).\end{aligned}

For $i \ne j$, with $k$ the remaining index, this is $\pm i\hbar (\mathbf{r} \times \mathbf{p})_k$. Summing over all components (noting that $\epsilon_{kij}( r_i p_j - r_j p_i )$ counts each index pair twice, hence the factor of $\frac{1}{{2}}$), we can write

\begin{aligned}\mathbf{L} \times \mathbf{L} &= \frac{1}{{2}} i\hbar \mathbf{e}_k \epsilon_{kij} ( r_i p_j - r_j p_i ) = i\hbar \mathbf{e}_k \epsilon_{kij} r_i p_j = i\hbar \mathbf{e}_k L_k = i\hbar \mathbf{L}.\end{aligned} \hspace{\stretch{1}}(2.46)

In [2], the commutator relationships are summarized with the vector form $\mathbf{L} \times \mathbf{L} = i \hbar \mathbf{L}$, instead of using the antisymmetric tensor form (4.224)

\begin{aligned}\left[{L_i},{L_j}\right] &= i \hbar \epsilon_{ijk} L_k\end{aligned} \hspace{\stretch{1}}(2.47)

as here in Desai. Both say the same thing.
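The commutator $[L_x, L_y] = i \hbar L_z$ can also be verified directly from the polar forms of the operators (those of 2.66 below) with a symbolic algebra package. A sympy sketch (mine, acting on a generic function of $\theta$ and $\phi$):

```python
import sympy as sp

theta, phi, hbar = sp.symbols('theta phi hbar')
f = sp.Function('f')(theta, phi)

# polar forms of the angular momentum operators
def Lx(g):
    return -sp.I * hbar * (-sp.sin(phi) * sp.diff(g, theta)
                           - sp.cos(phi) * sp.cot(theta) * sp.diff(g, phi))

def Ly(g):
    return -sp.I * hbar * (sp.cos(phi) * sp.diff(g, theta)
                           - sp.sin(phi) * sp.cot(theta) * sp.diff(g, phi))

def Lz(g):
    return -sp.I * hbar * sp.diff(g, phi)

# [L_x, L_y] f - i hbar L_z f should vanish identically
residual = sp.simplify(sp.expand(Lx(Ly(f)) - Ly(Lx(f)) - sp.I * hbar * Lz(f)))
commutator_ok = residual == 0
```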

## Problem 4.

TODO.

## Problem 5.

### Statement.

A free particle is moving along a path of radius $R$. Express the Hamiltonian in terms of the derivatives involving the polar angle of the particle and write down the Schr\”{o}dinger equation. Determine the wavefunction and the energy eigenvalues of the particle.

### Solution.

In classical mechanics our Lagrangian for this system is

\begin{aligned}\mathcal{L} = \frac{1}{{2}} m R^2 \dot{\theta}^2,\end{aligned} \hspace{\stretch{1}}(2.48)

with the canonical momentum

\begin{aligned}p_\theta = \frac{\partial {\mathcal{L}}}{\partial {\dot{\theta}}} = m R^2 \dot{\theta}.\end{aligned} \hspace{\stretch{1}}(2.49)

Thus the classical Hamiltonian is

\begin{aligned}H = \frac{1}{{2m R^2}} {p_\theta}^2.\end{aligned} \hspace{\stretch{1}}(2.50)

By analogy the QM Hamiltonian operator will therefore be

\begin{aligned}H = -\frac{\hbar^2}{2m R^2} \partial_{\theta\theta}.\end{aligned} \hspace{\stretch{1}}(2.51)

For $\Psi = \Theta(\theta) T(t)$, separation of variables gives us

\begin{aligned}-\frac{\hbar^2}{2m R^2} \frac{\Theta''}{\Theta} = i \hbar \frac{T'}{T} = E,\end{aligned} \hspace{\stretch{1}}(2.52)

from which we have

\begin{aligned}T &\propto e^{-i E t/\hbar} \\ \Theta &\propto e^{ \pm i \sqrt{2m E} R \theta/\hbar }.\end{aligned} \hspace{\stretch{1}}(2.53)

Requiring single valued $\Theta$, equal at any multiples of $2\pi$, we have

\begin{aligned}e^{ \pm i \sqrt{2m E} R (\theta + 2\pi)/\hbar } = e^{ \pm i \sqrt{2m E} R \theta/\hbar },\end{aligned}

or

\begin{aligned}\pm \sqrt{2m E} \frac{R}{\hbar} 2\pi = 2 \pi n,\end{aligned}

Suffixing the energy values with this index we have

\begin{aligned}E_n = \frac{n^2 \hbar^2}{2 m R^2}.\end{aligned} \hspace{\stretch{1}}(2.55)

Allowing both positive and negative integer values for $n$ we have

\begin{aligned}\Psi = \frac{1}{{\sqrt{2\pi}}} e^{i n \theta} e^{-i E_n t/\hbar},\end{aligned} \hspace{\stretch{1}}(2.56)

where the normalization was a result of the use of a $[0,2\pi]$ inner product over the angles

\begin{aligned}\left\langle{{\psi}} \vert {{\phi}}\right\rangle \equiv \int_0^{2\pi} \psi^{*}(\theta) \phi(\theta) d\theta.\end{aligned} \hspace{\stretch{1}}(2.57)
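The spectrum $E_n = n^2 \hbar^2/(2 m R^2)$ is easy to confirm numerically by diagonalizing a finite difference approximation of this Hamiltonian with periodic boundary conditions. A sketch (mine, in units where $\hbar = m = R = 1$, assumed for convenience):

```python
import numpy as np

N = 400                      # grid points on the circle
dtheta = 2 * np.pi / N

# periodic finite difference approximation of -d^2/dtheta^2
K = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
K[0, -1] = K[-1, 0] = -1.0
H = K / (2 * dtheta**2)      # H = -(1/2) d^2/dtheta^2 with hbar = m = R = 1

evals = np.sort(np.linalg.eigvalsh(H))

# E_n = n^2/2: lowest levels 0, 1/2, 1/2, 2, 2 (the nonzero ones doubly degenerate)
expected = np.array([0.0, 0.5, 0.5, 2.0, 2.0])
levels_match = bool(np.allclose(evals[:5], expected, atol=1e-3))
```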

## Problem 6.

### Statement.

Determine $\left[{L_i},{r}\right]$ and $\left[{L_i},{\mathbf{r}}\right]$.

### Solution.

Since $L_i$ contain only $\theta$ and $\phi$ partials, $\left[{L_i},{r}\right] = 0$. For the position vector, however, we have an angular dependence, and are left to evaluate $\left[{L_i},{\mathbf{r}}\right] = r \left[{L_i},{\hat{\mathbf{r}}}\right]$. We’ll need the partials for $\hat{\mathbf{r}}$. We have

\begin{aligned}\hat{\mathbf{r}} &= \mathbf{e}_3 e^{I \hat{\boldsymbol{\phi}} \theta} \\ \hat{\boldsymbol{\phi}} &= \mathbf{e}_2 e^{\mathbf{e}_1 \mathbf{e}_2 \phi} \\ I &= \mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_3\end{aligned} \hspace{\stretch{1}}(2.58)

Evaluating the partials we have

\begin{aligned}\partial_\theta \hat{\mathbf{r}} = \hat{\mathbf{r}} I \hat{\boldsymbol{\phi}}\end{aligned}

With

\begin{aligned}\hat{\boldsymbol{\theta}} &= \tilde{R} \mathbf{e}_1 R \\ \hat{\boldsymbol{\phi}} &= \tilde{R} \mathbf{e}_2 R \\ \hat{\mathbf{r}} &= \tilde{R} \mathbf{e}_3 R\end{aligned} \hspace{\stretch{1}}(2.61)

where $\tilde{R} R = 1$, and $\hat{\boldsymbol{\theta}} \hat{\boldsymbol{\phi}} \hat{\mathbf{r}} = \mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_3$, we have

\begin{aligned}\partial_\theta \hat{\mathbf{r}} &= \tilde{R} \mathbf{e}_3 \mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_3 \mathbf{e}_2 R = \tilde{R} \mathbf{e}_1 R = \hat{\boldsymbol{\theta}}\end{aligned} \hspace{\stretch{1}}(2.64)

For the $\phi$ partial we have

\begin{aligned}\partial_\phi \hat{\mathbf{r}}&= \mathbf{e}_3 \sin\theta I \hat{\boldsymbol{\phi}} \mathbf{e}_1 \mathbf{e}_2 \\ &= \sin\theta \hat{\boldsymbol{\phi}}\end{aligned}

We are now prepared to evaluate the commutators. Starting with the easiest we have

\begin{aligned}\left[{L_z},{\hat{\mathbf{r}}}\right] \Psi&=-i \hbar (\partial_\phi \hat{\mathbf{r}} \Psi - \hat{\mathbf{r}} \partial_\phi \Psi ) \\ &=-i \hbar (\partial_\phi \hat{\mathbf{r}}) \Psi \\ \end{aligned}

So we have

\begin{aligned}\left[{L_z},{\hat{\mathbf{r}}}\right]&=-i \hbar \sin\theta \hat{\boldsymbol{\phi}}\end{aligned} \hspace{\stretch{1}}(2.65)

Observe that by virtue of the product rule, only the action of the partials on $\hat{\mathbf{r}}$ itself contributes, and all the partials applied to $\Psi$ cancel out due to the commutator differences. That simplifies the remaining commutator evaluations. For reference, the polar forms of $L_x$ and $L_y$ are

\begin{aligned}L_x &= -i \hbar (-S_\phi \partial_\theta - C_\phi \cot\theta \partial_\phi) \\ L_y &= -i \hbar (C_\phi \partial_\theta - S_\phi \cot\theta \partial_\phi),\end{aligned} \hspace{\stretch{1}}(2.66)

where the sines and cosines are written with $S$, and $C$ respectively for short.

We therefore have

\begin{aligned}\left[{L_x},{\hat{\mathbf{r}}}\right]&= -i \hbar (-S_\phi (\partial_\theta \hat{\mathbf{r}}) - C_\phi \cot\theta (\partial_\phi \hat{\mathbf{r}}) ) \\ &= -i \hbar (-S_\phi \hat{\boldsymbol{\theta}} - C_\phi \cot\theta S_\theta \hat{\boldsymbol{\phi}} ) \\ &= -i \hbar (-S_\phi \hat{\boldsymbol{\theta}} - C_\phi C_\theta \hat{\boldsymbol{\phi}} ) \\ \end{aligned}

and

\begin{aligned}\left[{L_y},{\hat{\mathbf{r}}}\right]&= -i \hbar (C_\phi (\partial_\theta \hat{\mathbf{r}}) - S_\phi \cot\theta (\partial_\phi \hat{\mathbf{r}})) \\ &= -i \hbar (C_\phi \hat{\boldsymbol{\theta}} - S_\phi C_\theta \hat{\boldsymbol{\phi}} ).\end{aligned}

Adding back in the factor of $r$, and summarizing we have

\begin{aligned}\left[{L_i},{r}\right] &= 0 \\ \left[{L_x},{\mathbf{r}}\right] &= -i \hbar r (-\sin\phi \hat{\boldsymbol{\theta}} - \cos\phi \cos\theta \hat{\boldsymbol{\phi}} ) \\ \left[{L_y},{\mathbf{r}}\right] &= -i \hbar r (\cos\phi \hat{\boldsymbol{\theta}} - \sin\phi \cos\theta \hat{\boldsymbol{\phi}} ) \\ \left[{L_z},{\mathbf{r}}\right] &= -i \hbar r \sin\theta \hat{\boldsymbol{\phi}}\end{aligned} \hspace{\stretch{1}}(2.68)

## Problem 7.

### Statement.

Show that

\begin{aligned}e^{-i\pi L_x /\hbar } {\lvert {l,m} \rangle} = {\lvert {l,m-1} \rangle}\end{aligned} \hspace{\stretch{1}}(2.72)

TODO.

# References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.

[2] R. Liboff. Introductory quantum mechanics. 2003.

## Derivation of the spherical polar Laplacian

Posted by peeterjoot on October 9, 2010

[Click here for a PDF of this post with nicer formatting]

# Motivation.

In [1] was a Geometric Algebra derivation of the 2D polar Laplacian by squaring the gradient. In [2] was a factorization of the spherical polar unit vectors in a tidy compact form. Here both of these ideas are utilized to derive the spherical polar form for the Laplacian, an operation that is strictly algebraic (squaring the gradient) provided we operate on the unit vectors correctly.

# Our rotation multivector.

Our starting point is a pair of rotations. We rotate first in the $x,y$ plane by $\phi$

\begin{aligned}\mathbf{x} &\rightarrow \mathbf{x}' = \tilde{R_\phi} \mathbf{x} R_\phi \\ i &\equiv \mathbf{e}_1 \mathbf{e}_2 \\ R_\phi &= e^{i \phi/2}\end{aligned} \hspace{\stretch{1}}(2.1)

Then apply a rotation in the $\mathbf{e}_3 \wedge (\tilde{R_\phi} \mathbf{e}_1 R_\phi) = \tilde{R_\phi} \mathbf{e}_3 \mathbf{e}_1 R_\phi$ plane

\begin{aligned}\mathbf{x}' &\rightarrow \mathbf{x}'' = \tilde{R_\theta} \mathbf{x}' R_\theta \\ R_\theta &= e^{ \tilde{R_\phi} \mathbf{e}_3 \mathbf{e}_1 R_\phi \theta/2 } = \tilde{R_\phi} e^{ \mathbf{e}_3 \mathbf{e}_1 \theta/2 } R_\phi\end{aligned} \hspace{\stretch{1}}(2.4)

The composition of rotations now gives us

\begin{aligned}\mathbf{x}&\rightarrow \mathbf{x}'' = \tilde{R_\theta} \tilde{R_\phi} \mathbf{x} R_\phi R_\theta = \tilde{R} \mathbf{x} R \\ R &= R_\phi R_\theta = e^{ \mathbf{e}_3 \mathbf{e}_1 \theta/2 } e^{ \mathbf{e}_1 \mathbf{e}_2 \phi/2 }.\end{aligned}

# Expressions for the unit vectors.

The unit vectors in the rotated frame can now be calculated. With $I = \mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_3$ we can calculate

\begin{aligned}\hat{\boldsymbol{\phi}} &= \tilde{R} \mathbf{e}_2 R \\ \hat{\mathbf{r}} &= \tilde{R} \mathbf{e}_3 R \\ \hat{\boldsymbol{\theta}} &= \tilde{R} \mathbf{e}_1 R\end{aligned} \hspace{\stretch{1}}(3.6)

Performing these we get

\begin{aligned}\hat{\boldsymbol{\phi}}&= e^{ -\mathbf{e}_1 \mathbf{e}_2 \phi/2 } e^{ -\mathbf{e}_3 \mathbf{e}_1 \theta/2 } \mathbf{e}_2 e^{ \mathbf{e}_3 \mathbf{e}_1 \theta/2 } e^{ \mathbf{e}_1 \mathbf{e}_2 \phi/2 } \\ &= \mathbf{e}_2 e^{ i \phi },\end{aligned}

and

\begin{aligned}\hat{\mathbf{r}}&= e^{ -\mathbf{e}_1 \mathbf{e}_2 \phi/2 } e^{ -\mathbf{e}_3 \mathbf{e}_1 \theta/2 } \mathbf{e}_3 e^{ \mathbf{e}_3 \mathbf{e}_1 \theta/2 } e^{ \mathbf{e}_1 \mathbf{e}_2 \phi/2 } \\ &= e^{ -\mathbf{e}_1 \mathbf{e}_2 \phi/2 } (\mathbf{e}_3 \cos\theta + \mathbf{e}_1 \sin\theta ) e^{ \mathbf{e}_1 \mathbf{e}_2 \phi/2 } \\ &= \mathbf{e}_3 \cos\theta +\mathbf{e}_1 \sin\theta e^{ \mathbf{e}_1 \mathbf{e}_2 \phi } \\ &= \mathbf{e}_3 (\cos\theta + \mathbf{e}_3 \mathbf{e}_1 \sin\theta e^{ \mathbf{e}_1 \mathbf{e}_2 \phi } ) \\ &= \mathbf{e}_3 e^{I \hat{\boldsymbol{\phi}} \theta},\end{aligned}

and

\begin{aligned}\hat{\boldsymbol{\theta}}&= e^{ -\mathbf{e}_1 \mathbf{e}_2 \phi/2 } e^{ -\mathbf{e}_3 \mathbf{e}_1 \theta/2 } \mathbf{e}_1 e^{ \mathbf{e}_3 \mathbf{e}_1 \theta/2 } e^{ \mathbf{e}_1 \mathbf{e}_2 \phi/2 } \\ &= e^{ -\mathbf{e}_1 \mathbf{e}_2 \phi/2 } ( \mathbf{e}_1 \cos\theta - \mathbf{e}_3 \sin\theta ) e^{ \mathbf{e}_1 \mathbf{e}_2 \phi/2 } \\ &= \mathbf{e}_1 \cos\theta e^{ \mathbf{e}_1 \mathbf{e}_2 \phi/2 } - \mathbf{e}_3 \sin\theta \\ &= i \hat{\boldsymbol{\phi}} \cos\theta - \mathbf{e}_3 \sin\theta \\ &= i \hat{\boldsymbol{\phi}} (\cos\theta + \hat{\boldsymbol{\phi}} i \mathbf{e}_3 \sin\theta ) \\ &= i \hat{\boldsymbol{\phi}} e^{I \hat{\boldsymbol{\phi}} \theta}.\end{aligned}

Summarizing these are

\begin{aligned}\hat{\boldsymbol{\phi}} &= \mathbf{e}_2 e^{ i \phi } \\ \hat{\mathbf{r}} &= \mathbf{e}_3 e^{I \hat{\boldsymbol{\phi}} \theta} \\ \hat{\boldsymbol{\theta}} &= i \hat{\boldsymbol{\phi}} e^{I \hat{\boldsymbol{\phi}} \theta}.\end{aligned} \hspace{\stretch{1}}(3.9)

# Derivatives of the unit vectors.

We’ll need the partials. Most of these can be computed from 3.9 by inspection, and are

\begin{aligned}\partial_r \hat{\boldsymbol{\phi}} &= 0 \\ \partial_r \hat{\mathbf{r}} &= 0 \\ \partial_r \hat{\boldsymbol{\theta}} &= 0 \\ \partial_\theta \hat{\boldsymbol{\phi}} &= 0 \\ \partial_\theta \hat{\mathbf{r}} &= \hat{\mathbf{r}} I \hat{\boldsymbol{\phi}} \\ \partial_\theta \hat{\boldsymbol{\theta}} &= \hat{\boldsymbol{\theta}} I \hat{\boldsymbol{\phi}} \\ \partial_\phi \hat{\boldsymbol{\phi}} &= \hat{\boldsymbol{\phi}} i \\ \partial_\phi \hat{\mathbf{r}} &= \hat{\boldsymbol{\phi}} \sin\theta \\ \partial_\phi \hat{\boldsymbol{\theta}} &= \hat{\boldsymbol{\phi}} \cos\theta\end{aligned} \hspace{\stretch{1}}(4.12)

# Expanding the Laplacian.

We note that the line element is $d\mathbf{x} = \hat{\mathbf{r}} dr + \hat{\boldsymbol{\theta}} r d\theta + \hat{\boldsymbol{\phi}} r\sin\theta d\phi$, so our gradient in spherical coordinates is

\begin{aligned}\boldsymbol{\nabla} &= \hat{\mathbf{r}} \partial_r + \frac{\hat{\boldsymbol{\theta}}}{r} \partial_\theta + \frac{\hat{\boldsymbol{\phi}}}{r\sin\theta} \partial_\phi.\end{aligned} \hspace{\stretch{1}}(5.21)

We can now evaluate the Laplacian

\begin{aligned}\boldsymbol{\nabla}^2 &=\left( \hat{\mathbf{r}} \partial_r + \frac{\hat{\boldsymbol{\theta}}}{r} \partial_\theta + \frac{\hat{\boldsymbol{\phi}}}{r\sin\theta} \partial_\phi \right) \cdot\left( \hat{\mathbf{r}} \partial_r + \frac{\hat{\boldsymbol{\theta}}}{r} \partial_\theta + \frac{\hat{\boldsymbol{\phi}}}{r\sin\theta} \partial_\phi \right).\end{aligned} \hspace{\stretch{1}}(5.22)

Evaluating these one set at a time we have

\begin{aligned}\hat{\mathbf{r}} \partial_r \cdot \left( \hat{\mathbf{r}} \partial_r + \frac{\hat{\boldsymbol{\theta}}}{r} \partial_\theta + \frac{\hat{\boldsymbol{\phi}}}{r\sin\theta} \partial_\phi \right) &= \partial_{rr},\end{aligned}

and

\begin{aligned}\frac{1}{{r}} \hat{\boldsymbol{\theta}} \partial_\theta \cdot \left( \hat{\mathbf{r}} \partial_r + \frac{\hat{\boldsymbol{\theta}}}{r} \partial_\theta + \frac{\hat{\boldsymbol{\phi}}}{r\sin\theta} \partial_\phi \right)&=\frac{1}{{r}} \left\langle{{\hat{\boldsymbol{\theta}} \left(\hat{\mathbf{r}} I \hat{\boldsymbol{\phi}} \partial_r + \hat{\mathbf{r}} \partial_{\theta r}+ \frac{\hat{\boldsymbol{\theta}}}{r} \partial_{\theta\theta} + \frac{1}{{r}} \hat{\boldsymbol{\theta}} I \hat{\boldsymbol{\phi}} \partial_\theta+ \hat{\boldsymbol{\phi}} \partial_\theta \frac{1}{{r\sin\theta}} \partial_\phi\right)}}\right\rangle \\ &= \frac{1}{{r}} \partial_r+\frac{1}{{r^2}} \partial_{\theta\theta},\end{aligned}

and

\begin{aligned}\frac{\hat{\boldsymbol{\phi}}}{r\sin\theta} \partial_\phi &\cdot\left( \hat{\mathbf{r}} \partial_r + \frac{\hat{\boldsymbol{\theta}}}{r} \partial_\theta + \frac{\hat{\boldsymbol{\phi}}}{r\sin\theta} \partial_\phi \right) \\ &=\frac{1}{r\sin\theta} \left\langle{{\hat{\boldsymbol{\phi}}\left(\hat{\boldsymbol{\phi}} \sin\theta \partial_r + \hat{\mathbf{r}} \partial_{\phi r} + \hat{\boldsymbol{\phi}} \cos\theta \frac{1}{r} \partial_\theta + \frac{\hat{\boldsymbol{\theta}}}{r} \partial_{\phi \theta }+ \hat{\boldsymbol{\phi}} i \frac{1}{r\sin\theta} \partial_\phi + \hat{\boldsymbol{\phi}} \frac{1}{r\sin\theta} \partial_{\phi \phi }\right)}}\right\rangle \\ &=\frac{1}{{r}} \partial_r+ \frac{\cot\theta}{r^2}\partial_\theta+ \frac{1}{{r^2 \sin^2\theta}} \partial_{\phi\phi}\end{aligned}

Summing these we have

\begin{aligned}\boldsymbol{\nabla}^2 &=\partial_{rr}+ \frac{2}{r} \partial_r+\frac{1}{{r^2}} \partial_{\theta\theta}+ \frac{\cot\theta}{r^2}\partial_\theta+ \frac{1}{{r^2 \sin^2\theta}} \partial_{\phi\phi}\end{aligned} \hspace{\stretch{1}}(5.23)

This is often written with a chain rule trick to consolidate the $r$ and $\theta$ partials

\begin{aligned}\boldsymbol{\nabla}^2 \Psi &=\frac{1}{{r}} \partial_{rr} (r \Psi)+ \frac{1}{{r^2 \sin\theta}} \partial_\theta \left( \sin\theta \partial_\theta \Psi \right)+ \frac{1}{{r^2 \sin^2\theta}} \partial_{\phi\phi} \Psi\end{aligned} \hspace{\stretch{1}}(5.24)

It’s simple to verify that this is identical to 5.23.
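That verification can be delegated to a symbolic algebra package. A sympy sketch (mine) that expands both forms on a generic $\Psi(r, \theta, \phi)$ and takes the difference:

```python
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)
Psi = sp.Function('Psi')(r, th, ph)

# the expanded form of the spherical polar Laplacian
lap_a = (sp.diff(Psi, r, 2) + 2 * sp.diff(Psi, r) / r
         + sp.diff(Psi, th, 2) / r**2 + sp.cot(th) * sp.diff(Psi, th) / r**2
         + sp.diff(Psi, ph, 2) / (r**2 * sp.sin(th)**2))

# the chain rule consolidated form
lap_b = (sp.diff(r * Psi, r, 2) / r
         + sp.diff(sp.sin(th) * sp.diff(Psi, th), th) / (r**2 * sp.sin(th))
         + sp.diff(Psi, ph, 2) / (r**2 * sp.sin(th)**2))

forms_agree = sp.simplify(lap_a - lap_b) == 0
```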

# References

[1] Peeter Joot. Polar form for the gradient and Laplacian. [online]. http://sites.google.com/site/peeterjoot/math2009/polarGradAndLaplacian.pdf.

[2] Peeter Joot. Spherical Polar unit vectors in exponential form. [online]. http://sites.google.com/site/peeterjoot/math2009/sphericalPolarUnit.pdf .

## Desai Chapter II notes and problems.

Posted by peeterjoot on October 9, 2010

[Click here for a PDF of this post with nicer formatting]

# Motivation.

Chapter II notes for [1].

# Notes

## Canonical Commutator

Based on the canonical relationship $[X,P] = i\hbar$, and $\left\langle{{x'}} \vert {{x}}\right\rangle = \delta(x'-x)$, Desai determines the form of the $P$ operator in continuous space. A consequence of this is that the matrix element of the momentum operator is found to have a delta function specification

\begin{aligned}{\langle {x'} \rvert} P {\lvert {x} \rangle} = \delta(x - x') \left( -i \hbar \frac{d}{dx} \right).\end{aligned}

In particular the matrix element associated with the state ${\lvert {\phi} \rangle}$ is found to be

\begin{aligned}{\langle {x'} \rvert} P {\lvert {\phi} \rangle} = -i \hbar \frac{d}{dx'} \phi(x').\end{aligned}

Compare this to [2], where this last is taken as the definition of the momentum operator, and the relationship to the delta function is not spelled out explicitly. This canonical commutator approach, while more abstract, seems to have less black magic involved in the setup. We do require the commutator relationship $[X,P] = i\hbar$ to be pulled out of a magic hat, but at least the magic show is a structured one based on a small set of core assumptions.

It will likely be good to come back to this later when trying to reconcile this new (for me) Dirac notation with the more basic notation I’m already comfortable with. When trying to compare the two, it will be good to note that there is a matrix element that is implied in the more old fashioned treatment in a book such as [3].

There is one fundamental assumption that appears to be made in this section that isn’t justified by anything except the end result. That is the assumption that $P$ is a derivative like operator, acting with a product rule action. That’s used to obtain (2.28) and is a fairly black magic operation. This same assumption is also hiding, somewhat sneakily, in the manipulation for (2.44).

If one has to make that assumption that $P$ is a derivative like operator, I don’t feel this method of introducing it is any less arbitrary seeming. It is still pulled out of a magic hat, only because the answer is known ahead of time. The approach of [3], where the derivative nature is presented as consequence of transforming (via Fourier transforms) from the position to the momentum representation, seems much more intuitive and less arbitrary.

## Generalized momentum commutator.

It is stated that

\begin{aligned}[P,X^n] = - n i \hbar X^{n-1}.\end{aligned}

Let’s prove this. The $n=1$ case is the canonical commutator, which is assumed. Is there any good way to justify that from first principles, as presented in the text? We have to prove this for $n$, given the relationship for $n-1$. Expanding the $n$th power commutator we have

\begin{aligned}[P,X^n] &= P X^n - X^n P \\ &= P X^{n-1} X - X^{n } P \\ \end{aligned}

Rearranging the $n-1$ result we have

\begin{aligned}P X^{n-1} = X^{n-1} P - (n-1) i \hbar X^{n-2},\end{aligned}

and can insert that in our $[P,X^n]$ expansion for

\begin{aligned}[P,X^n] &= \left( X^{n-1} P - (n-1) i \hbar X^{n-2} \right)X - X^{n } P \\ &= X^{n-1} (PX) - (n-1) i \hbar X^{n-1} - X^{n } P \\ &= X^{n-1} ( X P - i\hbar) - (n-1) i \hbar X^{n-1} - X^{n } P \\ &= -X^{n-1} i\hbar - (n-1) i \hbar X^{n-1} \\ &= -n i \hbar X^{n-1} \qquad\square\end{aligned}
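The induction result is easy to spot check in the position representation, where $P = -i\hbar \frac{d}{dx}$ and $X$ is multiplication by $x$. A sympy sketch (mine, not from the text):

```python
import sympy as sp

x, hbar = sp.symbols('x hbar')
psi = sp.Function('psi')(x)

def P(g):
    # momentum operator in the position representation
    return -sp.I * hbar * sp.diff(g, x)

def comm_P_Xn(n):
    # [P, X^n] psi = P(x^n psi) - x^n P(psi)
    return P(x**n * psi) - x**n * P(psi)

# compare against -n i hbar X^{n-1} psi for the first few powers
identity_ok = all(
    sp.simplify(comm_P_Xn(n) + n * sp.I * hbar * x**(n - 1) * psi) == 0
    for n in range(1, 6)
)
```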

## Uncertainty principle.

The origin of the statement $[\Delta A, \Delta B] = [A, B]$ was not obvious to me. Expanding it out however is straightforward, and clarifies things. That is

\begin{aligned}[\Delta A, \Delta B] &= (A - \left\langle{{A}}\right\rangle) (B - \left\langle{{B}}\right\rangle) - (B - \left\langle{{B}}\right\rangle) (A - \left\langle{{A}}\right\rangle) \\ &= \left( A B - \left\langle{{A}}\right\rangle B - \left\langle{{B}}\right\rangle A +\left\langle{{A}}\right\rangle \left\langle{{B}}\right\rangle \right)-\left( B A - \left\langle{{B}}\right\rangle A - \left\langle{{A}}\right\rangle B +\left\langle{{B}}\right\rangle \left\langle{{A}}\right\rangle \right) \\ &= A B - B A \\ &= [A, B]\qquad\square\end{aligned}
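This also holds at the matrix level, since the expectation values are just scalars for a fixed state and scalars commute with everything. A quick numeric illustration (my sketch) with random Hermitian matrices:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_hermitian(n):
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return M + M.conj().T

n = 4
A, B = random_hermitian(n), random_hermitian(n)

# a normalized random state
psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)

# Delta A = A - <A> I, with <A> the expectation value in the state psi
dA = A - (psi.conj() @ A @ psi) * np.eye(n)
dB = B - (psi.conj() @ B @ psi) * np.eye(n)

def comm(X, Y):
    return X @ Y - Y @ X

commutators_equal = bool(np.allclose(comm(dA, dB), comm(A, B)))
```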

## Size of a particle

I found it curious that using $\Delta x \Delta p \approx \hbar$ instead of $\Delta x \Delta p \ge \hbar/2$, was sufficient to obtain the hydrogen ground state energy $E_{\text{min}} = -e^2/2 a_0$, without also having to do any factor of two fudging.

## Space displacement operator.

### Initial notes.

I’d be curious to know if others find the loose use of equality for approximation after approximation slightly disturbing too?

I also find it curious that (2.140) is written

\begin{aligned}D(x) = \exp\left( -i \frac{P}{\hbar} x \right),\end{aligned}

and not

\begin{aligned}D(x) = \exp\left( -i x \frac{P}{\hbar} \right).\end{aligned}

Is this intentional? It doesn’t seem like $P$ ought to be acting on $x$ in this case, so why order the terms that way?

Expanding the application of this operator, or at least its first order Taylor series, is helpful to get an idea about this. Doing so, with the original $\Delta x'$ value used in the derivation of the text we have to start

\begin{aligned}D(\Delta x') {\lvert {\phi} \rangle} &\approx \left(1 - i \frac{P}{\hbar} \Delta x' \right) {\lvert {\phi} \rangle} \\ &= \left(1 - i \left( -i \hbar \delta(x -x') \frac{\partial}{\partial x} \right) \frac{1}{{\hbar}} \Delta x'\right) {\lvert {\phi} \rangle} \\ \end{aligned}

This shows that the $\Delta x$ factor can be commuted with the momentum operator, as it is not a function of $x'$, so the question of $P x$, vs $x P$ above appears to be a non-issue.

Regardless of that conclusion, it seems worthwhile to continue the attempt at expanding this shift operator’s action on the state vector. Let’s do so by computing the matrix element ${\langle {x'} \rvert} D(\Delta x') {\lvert {\phi} \rangle}$. That is

\begin{aligned}{\langle {x'} \rvert} D(\Delta x') {\lvert {\phi} \rangle} &\approx\left\langle{{x'}} \vert {{\phi}}\right\rangle - {\langle {x'} \rvert} \delta(x -x') \frac{\partial}{\partial x} \Delta x' {\lvert {\phi} \rangle} \\ &=\phi(x') - \int {\langle {x'} \rvert} \delta(x -x') \frac{\partial}{\partial x} \Delta x' {\lvert {x'} \rangle} \left\langle{{x'}} \vert {{\phi}}\right\rangle dx' \\ &=\phi(x') - \Delta x' \int \delta(x -x') \frac{\partial}{\partial x} \left\langle{{x'}} \vert {{\phi}}\right\rangle dx' \\ &=\phi(x') - \Delta x' \frac{\partial}{\partial x'} \left\langle{{x'}} \vert {{\phi}}\right\rangle \\ &=\phi(x') - \Delta x' \frac{\partial}{\partial x'} \phi(x') \\ \end{aligned}

This is consistent with the text. It is interesting, and initially surprising, that the space displacement operator applied to a state vector introduces a negative shift in the wave function associated with that state vector. In the derivation of the text this was associated with the sign change from the integration by parts. Here we see it sneak back in, due to the $i^2$ once the momentum operator is expanded completely.
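This negative first order shift is easy to check numerically. A small sketch of my own, using an arbitrary Gaussian for $\phi$ and units where $\hbar = 1$:

```python
import math

# My own sketch: check that the first order action of the displacement
# operator, phi(x) - dx phi'(x), matches phi(x - dx) to O(dx^2).
# phi is an arbitrary Gaussian; hbar = 1.
phi = lambda x: math.exp(-x**2)
dphi = lambda x: -2.0 * x * math.exp(-x**2)   # phi'

dx, x0 = 1e-3, 0.5
first_order = phi(x0) - dx * dphi(x0)
exact = phi(x0 - dx)
print(abs(first_order - exact))               # O(dx^2)
```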

As a last note and question: only the first order Taylor approximation of the momentum operator was used above. If the higher order terms are retained, as in

\begin{aligned}\exp\left( -i \Delta x' \frac{P}{\hbar} \right) = 1 - \Delta x' \delta(x -x') \frac{\partial}{\partial x} + \frac{1}{{2}} \left( - \Delta x' \delta(x -x') \frac{\partial}{\partial x} \right)^2 + \cdots,\end{aligned}

then how does one evaluate a squared delta function (or Nth power)?

Talked to Vatche about this after class. The key to this is sequential evaluation. Considering the simple case of $P^2$, we evaluate one operator at a time, inserting an identity between the two, and never actually square the delta function

\begin{aligned}{\langle {x'} \rvert} P^2 {\lvert {\phi} \rangle} = \int dx'' {\langle {x'} \rvert} P {\lvert {x''} \rangle} {\langle {x''} \rvert} P {\lvert {\phi} \rangle}.\end{aligned}

I was also questioned why I was including the delta function at this point. Why would I do that? Thinking further on this, I see that it isn’t a reasonable thing to do. That delta function only comes into the mix when one takes the matrix element of the momentum operator, as in

\begin{aligned}{\langle {x'} \rvert} P {\lvert {x} \rangle} = -i \hbar \delta(x-x') \frac{d}{dx'}. \end{aligned}

This is very much like the fact that the delta function only shows up in the continuous representation in other contexts where one has matrix elements. The simplest example is just

\begin{aligned}\left\langle{{x'}} \vert {{x}}\right\rangle = \delta(x-x').\end{aligned}

I also see now that the momentum operator is directly identified with the derivative (no delta function) in two other places in the text. These are equations (2.32) and (2.46) respectively:

\begin{aligned}P(x) &= -i \hbar \frac{d}{dx} \\ P &= -i \hbar \frac{d}{dX}.\end{aligned}

In the first, (2.32), I thought the $P(x)$ was somehow different, just a helpful expression found along the way, but now it occurs to me that this was intended to be an unambiguous representation of the momentum operator itself.

### A second try.

Getting a feel for this Dirac notation takes a bit of adjustment. Let’s try evaluating the matrix element for the space displacement operator again, without abusing the notation, or thinking that we have a requirement for squared delta functions and other weirdness. We start with

\begin{aligned}D(\Delta x') {\lvert {\phi} \rangle}&=e^{-\frac{i P \Delta x'}{\hbar}} {\lvert {\phi} \rangle} \\ &=\int dx e^{-\frac{i P \Delta x'}{\hbar}} {\lvert {x} \rangle}\left\langle{{x}} \vert {{\phi}}\right\rangle \\ &=\int dx e^{-\frac{i P \Delta x'}{\hbar}} {\lvert {x} \rangle} \phi(x).\end{aligned}

Now, to evaluate $e^{-\frac{i P \Delta x'}{\hbar}} {\lvert {x} \rangle}$, we can expand in series

\begin{aligned}e^{-\frac{i P \Delta x'}{\hbar}} {\lvert {x} \rangle}&={\lvert {x} \rangle} + \sum_{k=1}^\infty \frac{1}{{k!}} \left( \frac{-i \Delta x'}{\hbar} \right)^k P^k {\lvert {x} \rangle}.\end{aligned}

It is tempting to left multiply by ${\langle {x'} \rvert}$ and commute that past the $P^k$, then write $P^k = (-i \hbar d/dx)^k$. That probably produces the correct result, but is abusive of the notation. We can still left multiply by ${\langle {x'} \rvert}$, but to be proper, I think we have to leave that on the left of the $P^k$ operator. This yields

\begin{aligned}{\langle {x'} \rvert} D(\Delta x') {\lvert {\phi} \rangle}&=\int dx \left( \left\langle{{x'}} \vert {{x}}\right\rangle + \sum_{k=1}^\infty \frac{1}{{k!}} \left( \frac{-i \Delta x'}{\hbar} \right)^k {\langle {x'} \rvert} P^k {\lvert {x} \rangle}\right) \phi(x) \\ &=\int dx \delta(x'- x) \phi(x)+\sum_{k=1}^\infty \frac{1}{{k!}} \left( \frac{-i \Delta x'}{\hbar} \right)^k \int dx {\langle {x'} \rvert} P^k {\lvert {x} \rangle} \phi(x).\end{aligned}

The first integral is just $\phi(x')$, and we are left with integrating the higher power momentum matrix elements, applied to the wave function $\phi(x)$. We can proceed iteratively to expand those integrals

\begin{aligned}\int dx {\langle {x'} \rvert} P^k {\lvert {x} \rangle} \phi(x)&= \iint dx dx'' {\langle {x'} \rvert} P^{k-1} {\lvert {x''} \rangle} {\langle {x''} \rvert} P {\lvert {x} \rangle} \phi(x) \\ \end{aligned}

Now we have a matrix element that we know what to do with. Namely, ${\langle {x''} \rvert} P {\lvert {x} \rangle} = -i \hbar \delta(x''-x) {\partial {}}/{\partial {x}}$, which yields

\begin{aligned}\int dx {\langle {x'} \rvert} P^k {\lvert {x} \rangle} \phi(x)&= -i \hbar \iint dx dx'' {\langle {x'} \rvert} P^{k-1} {\lvert {x''} \rangle} \delta(x''-x) \frac{\partial {}}{\partial {x}} \phi(x) \\ &= -i \hbar \int dx {\langle {x'} \rvert} P^{k-1} {\lvert {x} \rangle} \frac{\partial {\phi(x)}}{\partial {x}}.\end{aligned}

Each similar application of the identity operator brings down another $-i\hbar$ and derivative yielding

\begin{aligned}\int dx {\langle {x'} \rvert} P^k {\lvert {x} \rangle} \phi(x)&= (-i \hbar)^k \frac{\partial^k \phi(x')}{\partial {x'}^k}.\end{aligned}

Going back to our displacement operator matrix element, we now have

\begin{aligned}{\langle {x'} \rvert} D(\Delta x') {\lvert {\phi} \rangle}&=\phi(x')+\sum_{k=1}^\infty \frac{1}{{k!}} \left( \frac{-i \Delta x'}{\hbar} \right)^k (-i \hbar)^k \frac{\partial^k \phi(x')}{\partial {x'}^k} \\ &=\phi(x') +\sum_{k=1}^\infty \frac{1}{{k!}} \left( - \Delta x' \frac{\partial }{\partial x'} \right)^k \phi(x') \\ &= \phi(x' - \Delta x').\end{aligned}

This shows nicely why the sign goes negative, and it is no longer surprising when one observes that this result can be obtained directly by using the adjoint relationship

\begin{aligned}{\langle {x'} \rvert} D(\Delta x') {\lvert {\phi} \rangle}&=(D^\dagger(\Delta x') {\lvert {x'} \rangle})^\dagger {\lvert {\phi} \rangle} \\ &=(D(-\Delta x') {\lvert {x'} \rangle})^\dagger {\lvert {\phi} \rangle} \\ &={\lvert {x' - \Delta x'} \rangle}^\dagger {\lvert {\phi} \rangle} \\ &=\left\langle{{x' - \Delta x'}} \vert {{\phi}}\right\rangle \\ &=\phi(x' - \Delta x')\end{aligned}

That’s a whole lot easier than the integral manipulation, but at least shows that we now have a feel for the notation, and have confirmed the exponential formulation of the operator nicely.
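The series manipulation above is also easy to check numerically. Here is a quick sketch of mine that sums $\sum_k (-\Delta x')^k \phi^{(k)}(x')/k!$ for $\phi = \exp$, an arbitrary analytic choice for which every derivative is again $\exp$, and compares against $\phi(x' - \Delta x')$:

```python
import math

# My own sketch: sum the series sum_k (-dx)^k phi^(k)(x)/k! and compare
# against phi(x - dx), using phi = exp (an arbitrary analytic choice, for
# which every derivative is again exp).
x0, dx = 0.3, 0.7
series = sum((-dx)**k * math.exp(x0) / math.factorial(k) for k in range(30))
shifted = math.exp(x0 - dx)
print(series, shifted)    # both exp(-0.4)
```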

## Time evolution operator

The phrase “we identify time evolution with the Hamiltonian” is quite the magic hat maneuver! Is there a way that this identification would be logical without already knowing the answer?

## Dispersion delta function representation.

I found the principal part notation here a bit unclear. He writes

\begin{aligned}\lim_{\epsilon \rightarrow 0} \frac{(x'-x)}{(x'-x)^2 + \epsilon^2}= P\left( \frac{1}{{x' - x}} \right).\end{aligned}

In complex variables the principal part is the set of negative power terms of a Laurent series. For example, for $f(z) = \sum a_k z^k$, the principal part is

\begin{aligned}\sum_{k = -\infty}^{-1} a_k z^k\end{aligned}

This doesn’t vanish at $z = 0$ as the principal part in this section is stated to. In (2.202) he pulls the $P$ out of the integral, but I think the intention is really to keep it associated with the $1/(x'-x)$, as in

\begin{aligned}\lim_{\epsilon \rightarrow 0} \frac{1}{{\pi}} \int_0^\infty dx' \frac{f(x')}{x'-x - i \epsilon}= \frac{1}{{\pi}} \int_0^\infty dx' f(x') P\left( \frac{1}{{x' - x}} \right) + i f(x)\end{aligned}

Will this even have any relevance in this text?
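To convince myself of the limit itself, here is a rough numeric check of my own (with an arbitrary Gaussian $f$, and the integration extended over the whole line rather than $[0,\infty)$): for small $\epsilon$, the imaginary part of $\int f(x')/(x'-x-i\epsilon)\, dx'$ should tend to $\pi f(x)$.

```python
import numpy as np

# My own sketch: for small epsilon the imaginary part of
# int f(x') dx' / (x' - x - i eps) should tend to pi f(x), consistent with
# 1/(x' - x - i eps) -> P(1/(x' - x)) + i pi delta(x' - x).
# f is an arbitrary Gaussian and the integral is over the whole line.
f = lambda x: np.exp(-x**2)
x0, eps = 0.5, 1e-3

xp = np.linspace(-10.0, 10.0, 2_000_001)     # grid spacing well under eps
dxp = xp[1] - xp[0]
integral = np.sum(f(xp) / (xp - x0 - 1j * eps)) * dxp
print(integral.imag, np.pi * f(x0))          # agree to better than a percent
```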

# Problems.

## 1. Cauchy-Schwarz identity.

We wish to find the value of $\lambda$ that is just right to come up with the desired identity. The starting point is the expansion of the inner product

\begin{aligned}\left\langle{{a + \lambda b}} \vert {{a + \lambda b}}\right\rangle &= \left\langle{{a}} \vert {{a}}\right\rangle + \lambda \lambda^{*} \left\langle{{b}} \vert {{b}}\right\rangle + \lambda \left\langle{{a}} \vert {{b}}\right\rangle + \lambda^{*} \left\langle{{b}} \vert {{a}}\right\rangle \\ \end{aligned}

There is a trial and error approach to this problem, where one magically picks $\lambda \propto \left\langle{{b}} \vert {{a}}\right\rangle/\left\langle{{b}} \vert {{b}}\right\rangle^n$, and figures out the proportionality constant and scale factor for the denominator to do the job. A nicer way is to set up the problem as an extreme value exercise. We can write this inner product as a function of $\lambda$, and proceed with setting the derivative equal to zero

\begin{aligned}f(\lambda) =\left\langle{{a}} \vert {{a}}\right\rangle + \lambda \lambda^{*} \left\langle{{b}} \vert {{b}}\right\rangle + \lambda \left\langle{{a}} \vert {{b}}\right\rangle + \lambda^{*} \left\langle{{b}} \vert {{a}}\right\rangle \\ \end{aligned}

Its derivative is

\begin{aligned}\frac{df}{d\lambda} &=\left(\lambda^{*} + \lambda \frac{d\lambda^{*}}{d\lambda}\right) \left\langle{{b}} \vert {{b}}\right\rangle + \left\langle{{a}} \vert {{b}}\right\rangle + \frac{d\lambda^{*}}{d\lambda} \left\langle{{b}} \vert {{a}}\right\rangle \\ &=\lambda^{*} \left\langle{{b}} \vert {{b}}\right\rangle + \left\langle{{a}} \vert {{b}}\right\rangle +\frac{d\lambda^{*}}{d\lambda} \Bigl( \lambda \left\langle{{b}} \vert {{b}}\right\rangle + \left\langle{{b}} \vert {{a}}\right\rangle \Bigr)\end{aligned}

Now, we have a bit of a problem with $d\lambda^{*}/d\lambda$, since that doesn’t actually exist. However, that problem can be sidestepped if we insist that the factor multiplying it is zero. That provides a value for $\lambda$ that also kills off the remainder of $df/d\lambda$. That value is

\begin{aligned}\lambda = - \frac{\left\langle{{b}} \vert {{a}}\right\rangle }{ \left\langle{{b}} \vert {{b}}\right\rangle }.\end{aligned}

Back substitution yields

\begin{aligned}\left\langle{{a + \lambda b}} \vert {{a + \lambda b}}\right\rangle &= \left\langle{{a}} \vert {{a}}\right\rangle - \left\langle{{a}} \vert {{b}}\right\rangle\left\langle{{b}} \vert {{a}}\right\rangle/\left\langle{{b}} \vert {{b}}\right\rangle \ge 0.\end{aligned}

This is easily rearranged to obtain the desired result:

\begin{aligned}\left\langle{{a}} \vert {{a}}\right\rangle \left\langle{{b}} \vert {{b}}\right\rangle \ge \left\langle{{b}} \vert {{a}}\right\rangle\left\langle{{a}} \vert {{b}}\right\rangle.\end{aligned}
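A numeric spot check of my own for both the inequality and the stationary $\lambda$, using random complex vectors:

```python
import numpy as np

# My own sketch: check <a|a><b|b> >= |<a|b>|^2 with random complex vectors,
# and that the stationary value lambda = -<b|a>/<b|b> from the extremum
# argument minimizes <a + lambda b | a + lambda b>.
rng = np.random.default_rng(0)
a = rng.standard_normal(8) + 1j * rng.standard_normal(8)
b = rng.standard_normal(8) + 1j * rng.standard_normal(8)

ip = lambda u, v: np.vdot(u, v)             # <u|v>, antilinear in the bra
lam = -ip(b, a) / ip(b, b)
norm2 = lambda l: ip(a + l * b, a + l * b).real

print(norm2(lam), norm2(lam + 0.1), norm2(lam - 0.1j))
```

Perturbing $\lambda$ in any complex direction increases the norm, consistent with the stationary point being the minimum.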

## 2. Uncertainty relation.

### The problem.

Using the Schwarz inequality of problem 1, and a symmetric and antisymmetric (anticommutator and commutator) split of the operator product, show that

\begin{aligned}{\left\lvert{\Delta A \Delta B}\right\rvert}^2 \ge \frac{1}{{4}}{\left\lvert{ \left[{A},{B}\right]}\right\rvert}^2,\end{aligned} \hspace{\stretch{1}}(3.1)

and that this result implies

\begin{aligned}\Delta x \Delta p \ge \frac{\hbar}{2}.\end{aligned} \hspace{\stretch{1}}(3.2)

### The solution.

This problem seems somewhat misleading, since the Schwarz inequality appears to have nothing to do with showing 3.1, but only with the split of the operator product into symmetric and antisymmetric parts. Another possibly tricky thing about this problem is that there is no mention of the anticommutator in the text at this point that I can find, so if one does not know how it is defined, it must be figured out from context.

I’ve also had an interpretation problem with this since $\Delta x \Delta p$ in 3.2 cannot mean the operators as is the case of 3.1. My assumption is that in 3.2 these deltas are really absolute expectation values, and that we really want to show

\begin{aligned}{\left\lvert{\left\langle{{\Delta X}}\right\rangle}\right\rvert} {\left\lvert{\left\langle{{\Delta P}}\right\rangle}\right\rvert} \ge \frac{\hbar}{2}.\end{aligned} \hspace{\stretch{1}}(3.3)

However, I’m unable to demonstrate this. Instead I’m able to show two things:

\begin{aligned}\left\langle{{(\Delta X)^2 }}\right\rangle \left\langle{{(\Delta P)^2 }}\right\rangle&\ge \frac{\hbar^2}{4} \\ {\left\lvert{\left\langle{{\Delta X \Delta P }}\right\rangle }\right\rvert}&\ge\frac{\hbar}{2}\end{aligned}

Is one of these the result to be shown? Note that only the first of these required the Schwarz inequality. Also, it seems strange that we would want the expectation of the operator $\Delta X \Delta P$.

Starting with the first part of the problem, note that we can factor any operator product into a linear combination of two Hermitian operators using the commutator and anticommutator. That is

\begin{aligned}C D &= \frac{1}{{2}}\left( C D + D C\right) + \frac{1}{{2}}\left( C D - D C\right) \\ &= \frac{1}{{2}}\left( C D + D C\right) + \frac{1}{{2i}}\left( C D - D C\right) i \\ &\equiv \frac{1}{{2}}\left\{{C},{D}\right\}+\frac{1}{{2i}} \left[{C},{D}\right] i\end{aligned}

For Hermitian operators $C$, and $D$, using $(CD)^\dagger = D^\dagger C^\dagger = D C$, we can show that the two operator factors are Hermitian,

\begin{aligned}\left(\frac{1}{{2}}\left\{{C},{D}\right\}\right)^\dagger&= \frac{1}{{2}}\left( C D + D C\right)^\dagger \\ &= \frac{1}{{2}}\left( D^\dagger C^\dagger + C^\dagger D^\dagger\right) \\ &= \frac{1}{{2}}\left( D C + C D \right) \\ &= \frac{1}{{2}}\left\{{C},{D}\right\},\end{aligned}

\begin{aligned}\left(\frac{1}{{2}}\left[{C},{D}\right] i\right)^\dagger&= -\frac{i}{2} \left( C D - D C\right)^\dagger \\ &= -\frac{i}{2}\left( D^\dagger C^\dagger - C^\dagger D^\dagger\right) \\ &= -\frac{i}{2}\left( D C - C D \right) \\ &=\frac{1}{{2}}\left[{C},{D}\right] i\end{aligned}

So for the absolute squared value of the expectation of product of two operators we have

\begin{aligned}\left\langle{{C D }}\right\rangle^2&={\left\lvert{\left\langle{{\frac{1}{{2}}\left\{{C},{D}\right\} +\frac{1}{{2i}} \left[{C},{D}\right] i}}\right\rangle}\right\rvert}^2 \\ &={\left\lvert{ \frac{1}{{2}}\left\langle{{\left\{{C},{D}\right\}}}\right\rangle +\frac{1}{{2i}} \left\langle{{\left[{C},{D}\right] i}}\right\rangle }\right\rvert}^2.\end{aligned}

Now, these expectation values are real, given the fact that these operators are Hermitian. Suppose we write $a = \left\langle{{\left\{{C},{D}\right\}}}\right\rangle/2$, and $b = \left\langle{{\left[{C},{D}\right]i}}\right\rangle/2$, then we have

\begin{aligned}{\left\lvert{ \frac{1}{{2}}\left\langle{{\left\{{C},{D}\right\}}}\right\rangle +\frac{1}{{2i}} \left\langle{{\left[{C},{D}\right] i}}\right\rangle }\right\rvert}^2&={\left\lvert{ a - b i }\right\rvert}^2 \\ &=( a - b i ) ( a + b i ) \\ &=a^2 + b^2\end{aligned}

So we have for the squared expectation value of the operator product $C D$

\begin{aligned}\left\langle{{C D }}\right\rangle^2 &=\frac{1}{{4}}\left\langle{{\left\{{C},{D}\right\}}}\right\rangle^2 +\frac{1}{{4}} \left\langle{{\left[{C},{D}\right] i}}\right\rangle^2 \\ &=\frac{1}{{4}}{\left\lvert{\left\langle{{\left\{{C},{D}\right\}}}\right\rangle}\right\rvert}^2 +\frac{1}{{4}} {\left\lvert{\left\langle{{\left[{C},{D}\right] i}}\right\rangle}\right\rvert}^2 \\ &=\frac{1}{{4}}{\left\lvert{\left\langle{{\left\{{C},{D}\right\}}}\right\rangle}\right\rvert}^2 +\frac{1}{{4}} {\left\lvert{\left\langle{{\left[{C},{D}\right]}}\right\rangle}\right\rvert}^2 \\ &\ge\frac{1}{{4}} {\left\lvert{\left\langle{{\left[{C},{D}\right]}}\right\rangle}\right\rvert}^2.\end{aligned}

With $C = \Delta A$, and $D = \Delta B$, this almost completes the first part of the problem. The remaining thing to note is that $\left[{\Delta A},{\Delta B}\right] = \left[{A},{B}\right]$. This last identity is straightforward to show

\begin{aligned}\left[{\Delta A},{\Delta B}\right] &=\left[{A - \left\langle{{A}}\right\rangle},{B - \left\langle{{B}}\right\rangle}\right] \\ &=(A - \left\langle{{A}}\right\rangle)(B - \left\langle{{B}}\right\rangle)-(B - \left\langle{{B}}\right\rangle)(A - \left\langle{{A}}\right\rangle) \\ &=\left( A B - \left\langle{{A}}\right\rangle B - \left\langle{{B}}\right\rangle A + \left\langle{{A}}\right\rangle\left\langle{{B}}\right\rangle \right)-\left( B A - \left\langle{{B}}\right\rangle A - \left\langle{{A}}\right\rangle B + \left\langle{{B}}\right\rangle\left\langle{{A}}\right\rangle \right) \\ &=A B - B A \\ &=\left[{A},{B}\right].\end{aligned}

Putting the pieces together we have

\begin{aligned}\left\langle{{\Delta A \Delta B }}\right\rangle^2 &\ge\frac{1}{{4}} {\left\lvert{\left\langle{{\left[{A},{B}\right]}}\right\rangle}\right\rvert}^2.\end{aligned} \hspace{\stretch{1}}(3.4)

With expectation value implied by the absolute squared, this reproduces relation 3.1 as desired.
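Relation 3.4 is easy to spot check numerically, with random finite dimensional Hermitian matrices standing in for the operators. My own sketch:

```python
import numpy as np

# My own sketch: spot check |<Delta A Delta B>|^2 >= |<[A, B]>|^2 / 4 with
# random Hermitian matrices and a random normalized state.
rng = np.random.default_rng(1)
n = 5
def hermitian():
    M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return M + M.conj().T

A, B = hermitian(), hermitian()
psi = rng.standard_normal(n) + 1j * rng.standard_normal(n)
psi /= np.linalg.norm(psi)

ev = lambda M: np.vdot(psi, M @ psi)            # <psi| M |psi>
dA = A - ev(A).real * np.eye(n)
dB = B - ev(B).real * np.eye(n)

lhs = abs(ev(dA @ dB))**2
rhs = abs(ev(A @ B - B @ A))**2 / 4.0
print(lhs, rhs)      # lhs >= rhs
```

This also exercises $\left[{\Delta A},{\Delta B}\right] = \left[{A},{B}\right]$, since the commutator on the right hand side is computed from the unshifted matrices.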

For the remaining part of the problem, with ${\lvert {\alpha} \rangle} = \Delta A {\lvert {\psi} \rangle}$, and ${\lvert {\beta} \rangle} = \Delta B {\lvert {\psi} \rangle}$, and noting that $(\Delta A)^\dagger = \Delta A$ for Hermitian $A$ (and likewise for $B$), the Schwarz inequality

\begin{aligned}\left\langle{{\alpha}} \vert {{\alpha}}\right\rangle\left\langle{{\beta}} \vert {{\beta}}\right\rangle &\ge {\left\lvert{\left\langle{{\beta}} \vert {{\alpha}}\right\rangle}\right\rvert}^2,\end{aligned} \hspace{\stretch{1}}(3.5)

takes the following form

\begin{aligned}{\langle {\psi} \rvert}(\Delta A)^\dagger \Delta A {\lvert {\psi} \rangle} {\langle {\psi} \rvert}(\Delta B)^\dagger \Delta B {\lvert {\psi} \rangle} &\ge {\left\lvert{{\langle {\psi} \rvert} (\Delta B)^\dagger \Delta A {\lvert {\psi} \rangle}}\right\rvert}^2.\end{aligned}

These are expectation values, and allow us to use 3.4 to show

\begin{aligned}\left\langle{{(\Delta A)^2 }}\right\rangle \left\langle{{(\Delta B)^2 }}\right\rangle&\ge {\left\lvert{ \left\langle{{\Delta B \Delta A }}\right\rangle }\right\rvert}^2 \\ &\ge \frac{1}{{4}} {\left\lvert{\left\langle{{\left[{B},{A}\right]}}\right\rangle}\right\rvert}^2.\end{aligned}

For $A = X$, and $B = P$, this is

\begin{aligned}\left\langle{{(\Delta X)^2 }}\right\rangle \left\langle{{(\Delta P)^2 }}\right\rangle&\ge \frac{\hbar^2}{4}\end{aligned} \hspace{\stretch{1}}(3.6)

Hmm. This doesn’t look like it is quite the result that I expected? We have $\left\langle{{(\Delta X)^2 }}\right\rangle \left\langle{{(\Delta P)^2 }}\right\rangle$ instead of $\left\langle{{\Delta X }}\right\rangle^2 \left\langle{{\Delta P}}\right\rangle^2$?

Let’s step back slightly. Without introducing the Schwarz inequality the result 3.4 of the commutator manipulation, and $\left[{X},{P}\right] = i \hbar$ gives us

\begin{aligned}\left\langle{{\Delta X \Delta P }}\right\rangle^2 &\ge\frac{\hbar^2}{4} ,\end{aligned}

and taking roots we have

\begin{aligned}{\left\lvert{\left\langle{{\Delta X \Delta P }}\right\rangle }\right\rvert}&\ge\frac{\hbar}{2}.\end{aligned} \hspace{\stretch{1}}(3.7)

Is this really what we were intended to show?

Attempting to answer this myself, I refer to [2], where I find he uses a loose notation for this too, and writes in his equation 3.36

\begin{aligned}(\Delta C)^2 = \left\langle{{ (C - \left\langle{{C}}\right\rangle)^2 }}\right\rangle = \left\langle{{C^2}}\right\rangle - \left\langle{{C}}\right\rangle^2\end{aligned}

This usage seems consistent with that, so I think that it is a reasonable assumption that uncertainty relation $\Delta x \Delta p \ge \hbar/2$ is really shorthand notation for the more cumbersome relation involving roots of the expectations of mean-square deviation operators

\begin{aligned}\sqrt{\left\langle{{ (X - \left\langle{{X}}\right\rangle)^2 }}\right\rangle}\sqrt{\left\langle{{ (P - \left\langle{{P}}\right\rangle)^2 }}\right\rangle} \ge \frac{\hbar}{2}.\end{aligned} \hspace{\stretch{1}}(3.8)

This is in fact what was proved arriving at 3.6.

Ah ha! Found it. Referring to equation 2.93 in the text, I see that a lower case notation $\Delta x = \sqrt{\left\langle{{(\Delta X)^2}}\right\rangle}$ was introduced. This explains what seemed like ambiguous notation … it was just tricky notation, perfectly well explained, but introduced in passing in the text in a somewhat hidden seeming way.
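With that interpretation settled, here is a grid based check of my own ($\hbar = 1$) that a Gaussian wave packet saturates the bound, $\Delta x \Delta p = \hbar/2$:

```python
import numpy as np

# My own sketch: for a Gaussian wave packet the root mean square deviations
# saturate the bound, Delta x Delta p = hbar/2.  Grid based, hbar = 1;
# <p^2> is computed as hbar^2 int |psi'|^2 dx.
hbar, sigma = 1.0, 0.7
x = np.linspace(-12.0, 12.0, 200001)
dx = x[1] - x[0]
psi = (2.0 * np.pi * sigma**2)**-0.25 * np.exp(-x**2 / (4.0 * sigma**2))

xsq = np.sum(x**2 * psi**2) * dx                       # <(Delta X)^2>, <X> = 0
psq = hbar**2 * np.sum(np.gradient(psi, dx)**2) * dx   # <(Delta P)^2>, <P> = 0
print(np.sqrt(xsq * psq))                              # hbar/2 = 0.5
```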

## 3.

This problem can be done by inspection.

TODO.

## 5. Hermitian radial differential operator.

Show that the operator

\begin{aligned}R = -i \hbar \frac{\partial {}}{\partial {r}},\end{aligned}

is not Hermitian, and find the constant $a$ so that

\begin{aligned}T = -i \hbar \left( \frac{\partial {}}{\partial {r}} + \frac{a}{r} \right),\end{aligned}

is Hermitian.

For the first part of the problem we can show that

\begin{aligned}\left( {\langle {\hat{\boldsymbol{\psi}}} \rvert} R {\lvert {\hat{\boldsymbol{\phi}}} \rangle} \right)^{*} \ne {\langle {\hat{\boldsymbol{\phi}}} \rvert} R {\lvert {\hat{\boldsymbol{\psi}}} \rangle}.\end{aligned}

For the RHS we have

\begin{aligned}{\langle {\hat{\boldsymbol{\phi}}} \rvert} R {\lvert {\hat{\boldsymbol{\psi}}} \rangle} = -i \hbar \iiint dr d\theta d\phi r^2 \sin\theta \hat{\boldsymbol{\phi}}^{*} \frac{\partial {\hat{\boldsymbol{\psi}}}}{\partial {r}}\end{aligned}

and for the LHS we have

\begin{aligned}\left( {\langle {\hat{\boldsymbol{\psi}}} \rvert} R {\lvert {\hat{\boldsymbol{\phi}}} \rangle} \right)^{*}&= i \hbar \iiint dr d\theta d\phi r^2 \sin\theta \hat{\boldsymbol{\psi}} \frac{\partial {\hat{\boldsymbol{\phi}}^{*}}}{\partial {r}} \\ &= -i \hbar \iiint dr d\theta d\phi \sin\theta \left( 2 r \hat{\boldsymbol{\psi}} + r^2 \frac{\partial {\hat{\boldsymbol{\psi}}}}{\partial {r}} \right)\hat{\boldsymbol{\phi}}^{*} \\ \end{aligned}

So, unless the $2 r \hat{\boldsymbol{\psi}}$ term vanishes, the operator $R$ is not Hermitian.

Moving on to finding the constant $a$ such that $T$ is Hermitian we calculate

\begin{aligned}\left( {\langle {\hat{\boldsymbol{\psi}}} \rvert} T {\lvert {\hat{\boldsymbol{\phi}}} \rangle} \right)^{*}&= i \hbar \iiint dr d\theta d\phi r^2 \sin\theta \hat{\boldsymbol{\psi}} \left( \frac{\partial {}}{\partial {r}} + \frac{a}{r} \right) \hat{\boldsymbol{\phi}}^{*} \\ &= i \hbar \iiint dr d\theta d\phi \sin\theta \hat{\boldsymbol{\psi}} \left( r^2 \frac{\partial {}}{\partial {r}} + a r \right) \hat{\boldsymbol{\phi}}^{*} \\ &= -i \hbar \iiint dr d\theta d\phi \sin\theta \left( r^2 \frac{\partial {\hat{\boldsymbol{\psi}}}}{\partial {r}} + 2 r \hat{\boldsymbol{\psi}} - a r \hat{\boldsymbol{\psi}} \right) \hat{\boldsymbol{\phi}}^{*} \\ \end{aligned}

and

\begin{aligned}{\langle {\hat{\boldsymbol{\phi}}} \rvert} T {\lvert {\hat{\boldsymbol{\psi}}} \rangle} = -i \hbar \iiint dr d\theta d\phi \sin\theta \hat{\boldsymbol{\phi}}^{*} \left( r^2 \frac{\partial {\hat{\boldsymbol{\psi}}}}{\partial {r}} + a r \hat{\boldsymbol{\psi}} \right)\end{aligned}

So, for $T$ to be Hermitian, we require

\begin{aligned}2 r - a r = a r.\end{aligned}

So $a = 1$, and our Hermitian operator is

\begin{aligned}T = -i \hbar \left( \frac{\partial {}}{\partial {r}} + \frac{1}{r} \right).\end{aligned}
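A grid based check of my own for both halves of this problem, with $\hbar = 1$ and arbitrary real test functions that decay at infinity:

```python
import numpy as np

# My own sketch: grid check, with hbar = 1 and arbitrary real test functions
# that decay at infinity, that T = -i (d/dr + 1/r) is Hermitian under the
# radial measure r^2 dr, while R = -i d/dr is not.
r = np.linspace(1e-6, 20.0, 400001)
dr = r[1] - r[0]
phi = np.exp(-r**2)
psi = r * np.exp(-r)

def melem(op, f, g):                  # <f| op |g> = int r^2 f (op g) dr
    return np.sum(r**2 * f * op(g)) * dr

R = lambda g: -1j * np.gradient(g, dr)
T = lambda g: -1j * (np.gradient(g, dr) + g / r)

t_diff = abs(melem(T, phi, psi) - np.conj(melem(T, psi, phi)))
r_diff = abs(melem(R, phi, psi) - np.conj(melem(R, psi, phi)))
print(t_diff, r_diff)                 # ~ 0 for T, clearly nonzero for R
```

The residual for $R$ is exactly the surviving $2 r$ boundary term of the integration by parts above.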

## 6. Radial directional derivative operator.

### Problem.

Show that

\begin{aligned}D = \mathbf{p} \cdot \hat{\mathbf{r}} + \hat{\mathbf{r}} \cdot \mathbf{p},\end{aligned}

is Hermitian. Expand this operator in spherical coordinates. Compare result to problem 5.

### Solution.

Tackling the spherical coordinates expression of the operator $D$, we have

\begin{aligned}\frac{1}{{-i\hbar}} D \Psi &= \left( \boldsymbol{\nabla} \cdot \hat{\mathbf{r}} + \hat{\mathbf{r}} \cdot \boldsymbol{\nabla} \right) \Psi \\ &= \left( \boldsymbol{\nabla} \cdot \hat{\mathbf{r}} \right) \Psi + \left( \boldsymbol{\nabla} \Psi \right) \cdot \hat{\mathbf{r}} + \hat{\mathbf{r}} \cdot \left(\boldsymbol{\nabla} \Psi\right) \\ &=\left( \boldsymbol{\nabla} \cdot \hat{\mathbf{r}} \right) \Psi + 2 \hat{\mathbf{r}} \cdot \left( \boldsymbol{\nabla} \Psi \right).\end{aligned}

Here braces have been used to denote the extent of the gradient operation. In spherical polar coordinates, our gradient is

\begin{aligned}\boldsymbol{\nabla} \equiv \hat{\mathbf{r}} \frac{\partial {}}{\partial {r}}+\hat{\boldsymbol{\theta}} \frac{1}{{r}} \frac{\partial {}}{\partial {\theta}}+\hat{\boldsymbol{\phi}} \frac{1}{{r \sin\theta}} \frac{\partial {}}{\partial {\phi}}.\end{aligned}

This gets us most of the way there, and we have

\begin{aligned}\frac{1}{{-i\hbar}} D \Psi &=2 \frac{\partial {\Psi}}{\partial {r}} + \left( \hat{\mathbf{r}} \cdot \frac{\partial {\hat{\mathbf{r}}}}{\partial {r}}+\frac{1}{{r}} \hat{\boldsymbol{\theta}} \cdot \frac{\partial {\hat{\mathbf{r}}}}{\partial {\theta}}+\frac{1}{{r \sin\theta}} \hat{\boldsymbol{\phi}} \cdot \frac{\partial {\hat{\mathbf{r}}}}{\partial {\phi}}\right) \Psi.\end{aligned}

Since ${\partial {\hat{\mathbf{r}}}}/{\partial {r}} = 0$, we are left with evaluating $\hat{\boldsymbol{\theta}} \cdot {\partial {\hat{\mathbf{r}}}}/{\partial {\theta}}$, and $\hat{\boldsymbol{\phi}} \cdot {\partial {\hat{\mathbf{r}}}}/{\partial {\phi}}$. To do so I chose to employ the (Geometric Algebra) exponential form of the spherical unit vectors [4]

\begin{aligned}I &= \mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_3 \\ \hat{\boldsymbol{\phi}} &= \mathbf{e}_{2} \exp( I \mathbf{e}_3 \phi ) \\ \hat{\mathbf{r}} &= \mathbf{e}_3 \exp( I \hat{\boldsymbol{\phi}} \theta ) \\ \hat{\boldsymbol{\theta}} &= \mathbf{e}_1 \mathbf{e}_2 \hat{\boldsymbol{\phi}} \exp( I \hat{\boldsymbol{\phi}} \theta ).\end{aligned}

The partials of interest are then

\begin{aligned}\frac{\partial {\hat{\mathbf{r}}}}{\partial {\theta}} &= \mathbf{e}_3 I \hat{\boldsymbol{\phi}} \exp( I \hat{\boldsymbol{\phi}} \theta ) = \hat{\boldsymbol{\theta}},\end{aligned}

and

\begin{aligned}\frac{\partial {\hat{\mathbf{r}}}}{\partial {\phi}} &= \frac{\partial {}}{\partial {\phi}} \mathbf{e}_3 \left( \cos\theta + I \hat{\boldsymbol{\phi}} \sin\theta \right) \\ &= \mathbf{e}_1 \mathbf{e}_2 \sin\theta \frac{\partial {\hat{\boldsymbol{\phi}}}}{\partial {\phi}} \\ &= \mathbf{e}_1 \mathbf{e}_2 \sin\theta \mathbf{e}_2 \mathbf{e}_1 \mathbf{e}_2 \exp( I \mathbf{e}_3 \phi ) \\ &= \sin\theta \hat{\boldsymbol{\phi}}.\end{aligned}

Only after computing these did I find exactly these results for the partials of interest on mathworld’s Spherical Coordinates page, which confirms the calculations. Note that a different angle convention is used there, so one has to exchange $\phi$ and $\theta$, and the corresponding unit vector labels.

Substitution back into our expression for the operator we have

\begin{aligned}D &= - 2 i \hbar \left( \frac{\partial {}}{\partial {r}} + \frac{1}{{r}} \right),\end{aligned}

an operator that is exactly twice the operator of problem 5, already shown to be Hermitian. Since the constant numerical scaling of a Hermitian operator leaves it Hermitian, this shows that $D$ is Hermitian as expected.

### $\hat{\boldsymbol{\theta}}$ directional momentum operator

Let’s try this for the other unit vector directions too. We also want

\begin{aligned}\left( \boldsymbol{\nabla} \cdot \hat{\boldsymbol{\theta}} + \hat{\boldsymbol{\theta}} \cdot \boldsymbol{\nabla} \right) \Psi&=2 \hat{\boldsymbol{\theta}} \cdot (\boldsymbol{\nabla} \Psi) + \left( \boldsymbol{\nabla} \cdot \hat{\boldsymbol{\theta}} \right) \Psi.\end{aligned}

The work consists of evaluating

\begin{aligned}\boldsymbol{\nabla} \cdot \hat{\boldsymbol{\theta}} &= \hat{\mathbf{r}} \cdot \frac{\partial {\hat{\boldsymbol{\theta}}}}{\partial {r}}+ \frac{1}{{r}} \hat{\boldsymbol{\theta}} \cdot \frac{\partial {\hat{\boldsymbol{\theta}}}}{\partial {\theta}}+ \frac{1}{{r \sin\theta}} \hat{\boldsymbol{\phi}} \cdot \frac{\partial {\hat{\boldsymbol{\theta}}}}{\partial {\phi}}.\end{aligned}

This time we need the ${\partial {\hat{\boldsymbol{\theta}}}}/{\partial {\theta}}$, ${\partial {\hat{\boldsymbol{\theta}}}}/{\partial {\phi}}$ partials, which are

\begin{aligned}\frac{\partial {\hat{\boldsymbol{\theta}}}}{\partial {\theta}} &=\mathbf{e}_1 \mathbf{e}_2 \hat{\boldsymbol{\phi}} I \hat{\boldsymbol{\phi}} \exp( I \hat{\boldsymbol{\phi}} \theta) \\ &=-\mathbf{e}_3 \exp( I \hat{\boldsymbol{\phi}} \theta) \\ &=- \hat{\mathbf{r}}.\end{aligned}

This has no $\hat{\boldsymbol{\theta}}$ component, so does not contribute to $\boldsymbol{\nabla} \cdot \hat{\boldsymbol{\theta}}$. Noting that

\begin{aligned}\frac{\partial {\hat{\boldsymbol{\phi}}}}{\partial {\phi}} &= -\mathbf{e}_1 \exp( I \mathbf{e}_3 \phi ) = \mathbf{e}_2 \mathbf{e}_1 \hat{\boldsymbol{\phi}},\end{aligned}

the $\phi$ partial is

\begin{aligned}\frac{\partial {\hat{\boldsymbol{\theta}}}}{\partial {\phi}} &=\mathbf{e}_1 \mathbf{e}_2 \left( \frac{\partial {\hat{\boldsymbol{\phi}}}}{\partial {\phi}} \exp( I \hat{\boldsymbol{\phi}} \theta )+\hat{\boldsymbol{\phi}} I \sin\theta \frac{\partial {\hat{\boldsymbol{\phi}}}}{\partial {\phi}} \right) \\ &=\hat{\boldsymbol{\phi}} \left( \exp( I \hat{\boldsymbol{\phi}} \theta )+I \sin\theta \mathbf{e}_2 \mathbf{e}_1 \hat{\boldsymbol{\phi}}\right),\end{aligned}

with $\hat{\boldsymbol{\phi}}$ component

\begin{aligned}\hat{\boldsymbol{\phi}} \cdot \frac{\partial {\hat{\boldsymbol{\theta}}}}{\partial {\phi}} &=\left\langle{{\exp( I \hat{\boldsymbol{\phi}} \theta )+I \sin\theta \mathbf{e}_2 \mathbf{e}_1 \hat{\boldsymbol{\phi}} }}\right\rangle \\ &=\cos\theta + \mathbf{e}_3 \cdot \hat{\boldsymbol{\phi}} \sin\theta \\ &=\cos\theta.\end{aligned}

Assembling the results, and labeling this operator $\Theta$ we have

\begin{aligned}\Theta &\equiv \frac{1}{{2}} \left( \mathbf{p} \cdot \hat{\boldsymbol{\theta}} + \hat{\boldsymbol{\theta}} \cdot \mathbf{p} \right) \\ &=-i \hbar \frac{1}{{r}} \left( \frac{\partial {}}{\partial {\theta}} + \frac{1}{{2}} \cot\theta \right).\end{aligned}

It would be reasonable to expect this operator to also be Hermitian, and checking explicitly by comparing
${\langle {\Phi} \rvert} \Theta {\lvert {\Psi} \rangle}^{*}$ and ${\langle {\Psi} \rvert} \Theta {\lvert {\Phi} \rangle}$ shows that this is in fact the case.
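For the record, here is the sort of numeric check I mean (my own sketch, $\hbar = 1$, with the inert radial factor set to one, and arbitrary smooth test functions vanishing at $\theta = 0, \pi$):

```python
import numpy as np

# My own sketch: grid check, with hbar = 1 and the inert radial factor set
# to one, that Theta = -i (d/dtheta + cot(theta)/2) is Hermitian under the
# sin(theta) dtheta measure.  F and G are arbitrary smooth test functions
# vanishing at theta = 0 and theta = pi.
th = np.linspace(1e-4, np.pi - 1e-4, 200001)
dth = th[1] - th[0]
F = np.sin(th) * np.cos(th)
G = np.sin(th)**2

Theta = lambda g: -1j * (np.gradient(g, dth) + 0.5 * g / np.tan(th))
melem = lambda f, g: np.sum(np.sin(th) * f * Theta(g)) * dth

diff = abs(melem(F, G) - np.conj(melem(G, F)))
print(diff)     # ~ 0, consistent with Theta being Hermitian
```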

### $\hat{\boldsymbol{\phi}}$ directional momentum operator

Let’s try the $\hat{\boldsymbol{\phi}}$ direction as well. We want

\begin{aligned}\left( \boldsymbol{\nabla} \cdot \hat{\boldsymbol{\phi}} + \hat{\boldsymbol{\phi}} \cdot \boldsymbol{\nabla} \right) \Psi&=2 \hat{\boldsymbol{\phi}} \cdot (\boldsymbol{\nabla} \Psi) + \left( \boldsymbol{\nabla} \cdot \hat{\boldsymbol{\phi}} \right) \Psi.\end{aligned}

The work consists of evaluating

\begin{aligned}\boldsymbol{\nabla} \cdot \hat{\boldsymbol{\phi}} &= \hat{\mathbf{r}} \cdot \frac{\partial {\hat{\boldsymbol{\phi}}}}{\partial {r}}+ \frac{1}{{r}} \hat{\boldsymbol{\theta}} \cdot \frac{\partial {\hat{\boldsymbol{\phi}}}}{\partial {\theta}}+ \frac{1}{{r \sin\theta}} \hat{\boldsymbol{\phi}} \cdot \frac{\partial {\hat{\boldsymbol{\phi}}}}{\partial {\phi}}.\end{aligned}

This time we need the ${\partial {\hat{\boldsymbol{\phi}}}}/{\partial {\theta}}$, ${\partial {\hat{\boldsymbol{\phi}}}}/{\partial {\phi}} = \mathbf{e}_2 \mathbf{e}_1 \hat{\boldsymbol{\phi}}$ partials. The $\theta$ partial is

\begin{aligned}\frac{\partial {\hat{\boldsymbol{\phi}}}}{\partial {\theta}} &=\frac{\partial {}}{\partial {\theta}} \mathbf{e}_2 \exp( I \mathbf{e}_3 \phi ) \\ &= 0.\end{aligned}

We conclude that $\boldsymbol{\nabla} \cdot \hat{\boldsymbol{\phi}} = 0$, and expect that we have one more Hermitian operator

\begin{aligned}\Phi &\equiv \frac{1}{{2}} \left( \mathbf{p} \cdot \hat{\boldsymbol{\phi}} + \hat{\boldsymbol{\phi}} \cdot \mathbf{p} \right) \\ &=-i \hbar \frac{1}{{r \sin\theta}} \frac{\partial {}}{\partial {\phi}}.\end{aligned}

It is simple to confirm that this is Hermitian, since the integration by parts does not involve the volume element. In fact, any operator $-i\hbar f(r,\theta) {\partial {}}/{\partial {\phi}}$ would also be Hermitian, including the simplest case $-i\hbar {\partial {}}/{\partial {\phi}}$. I will have to dig out my Bohm text again, since I seem to recall that being the one used in the spherical harmonics chapter.

### A note on the Hermitian test and Dirac notation.

I’ve been a bit loose with my notation. I’ve stated that my demonstrations of the Hermitian nature have been done by showing

\begin{aligned}{\langle {\phi} \rvert} A {\lvert {\psi} \rangle}^{*} - {\langle {\psi} \rvert} A {\lvert {\phi} \rangle} = 0.\end{aligned}

However, what I’ve actually done is show that

\begin{aligned}\left( \int d^3 \mathbf{x} \phi^{*} (\mathbf{x}) A(\mathbf{x}) \psi(\mathbf{x}) \right)^{*} - \int d^3 \mathbf{x} \psi^{*} (\mathbf{x}) A(\mathbf{x}) \phi(\mathbf{x}) = 0.\end{aligned}

To justify this note that

\begin{aligned}{\langle {\phi} \rvert} A {\lvert {\psi} \rangle}^{*} &=\left( \iint d^3 \mathbf{r} d^3 \mathbf{s} \left\langle{{\phi}} \vert {\mathbf{r}}\right\rangle {\langle {\mathbf{r}} \rvert} A {\lvert {\mathbf{s}} \rangle} \left\langle{\mathbf{s}} \vert {{\psi}}\right\rangle \right)^{*} \\ &=\iint d^3 \mathbf{r} d^3 \mathbf{s} \phi(\mathbf{r}) \delta^3(\mathbf{r} - \mathbf{s}) A^{*}(\mathbf{s}) \psi^{*}(\mathbf{s}) \\ &=\int d^3 \mathbf{r} \phi(\mathbf{r}) A^{*}(\mathbf{r}) \psi^{*}(\mathbf{r}),\end{aligned}

and

\begin{aligned}{\langle {\psi} \rvert} A {\lvert {\phi} \rangle} &=\iint d^3 \mathbf{r} d^3 \mathbf{s} \left\langle{{\psi}} \vert {\mathbf{r}}\right\rangle {\langle {\mathbf{r}} \rvert} A {\lvert {\mathbf{s}} \rangle} \left\langle{\mathbf{s}} \vert {{\phi}}\right\rangle \\ &=\iint d^3 \mathbf{r} d^3 \mathbf{s} \psi^{*}(\mathbf{r}) \delta^3(\mathbf{r} - \mathbf{s}) A(\mathbf{s}) \phi(\mathbf{s}) \\ &=\int d^3 \mathbf{r} \psi^{*}(\mathbf{r}) A(\mathbf{r}) \phi(\mathbf{r}).\end{aligned}

Working backwards one sees that the comparison of the wave function integrals in explicit inner product notation is sufficient to demonstrate the Hermitian property.

## 7. Some commutators.

### 7. Problem.

For $D$ in problem 6, obtain

\begin{itemize}
\item i) $[D, x_i]$
\item ii) $[D, p_i]$
\item iii) $[D, L_i]$, where $L_i = \mathbf{e}_i \cdot (\mathbf{r} \times \mathbf{p})$.
\item iv) Show that $e^{i\alpha D/\hbar} x_i e^{-i\alpha D/\hbar} = e^\alpha x_i$
\end{itemize}

### 7. Expansion of $\left[{D},{x_i}\right]$.

While expressing the operator as $D = -2 i \hbar (1/r) (1 + r \partial_r)$ is less complicated than $D = \mathbf{p} \cdot \hat{\mathbf{r}} + \hat{\mathbf{r}} \cdot \mathbf{p}$, since no operation on $\hat{\mathbf{r}}$ is required, this doesn't look particularly convenient for use with Cartesian coordinates. Slightly better perhaps is

\begin{aligned}D = -2 i\hbar \frac{1}{{r}}( \mathbf{r} \cdot \boldsymbol{\nabla} + 1)\end{aligned}

\begin{aligned}[D, x_i] \Psi&=D x_i \Psi - x_i D \Psi \\ &=-2 i \hbar \frac{1}{{r}} \left( \mathbf{r} \cdot \boldsymbol{\nabla} + 1 \right) x_i \Psi+2 i \hbar x_i \frac{1}{{r}} \left( \mathbf{r} \cdot \boldsymbol{\nabla} + 1 \right) \Psi \\ &=-2 i \hbar \frac{1}{{r}} \mathbf{r} \cdot \boldsymbol{\nabla} x_i \Psi+2 i \hbar x_i \frac{1}{{r}} \mathbf{r} \cdot \boldsymbol{\nabla} \Psi \\ &=-2 i \hbar \frac{1}{{r}} \mathbf{r} \cdot (\boldsymbol{\nabla} x_i) \Psi-2 i \hbar x_i \frac{1}{{r}} \mathbf{r} \cdot \boldsymbol{\nabla} \Psi+2 i \hbar x_i \frac{1}{{r}} \mathbf{r} \cdot \boldsymbol{\nabla} \Psi \\ &=-2 i \hbar \frac{1}{{r}} \mathbf{r} \cdot \mathbf{e}_i \Psi.\end{aligned}

So this first commutator is:

\begin{aligned}[D, x_i] = -2 i \hbar \frac{x_i}{r}.\end{aligned}

### 7. Alternate expansion of $\left[{D},{x_i}\right]$.

Let's try this instead completely in coordinate notation to verify. I'll use implicit summation for repeated indices, and write $\partial_k = \partial/\partial x_k$. A few intermediate results will be required.

\begin{aligned}\partial_k \frac{1}{{r}} &= \partial_k (x_m x_m)^{-1/2} \\ &= -\frac{1}{{2}} 2 x_k (x_m x_m)^{-3/2} \\ \end{aligned}

Or

\begin{aligned}\partial_k \frac{1}{{r}} &= - \frac{x_k}{r^3}\end{aligned} \hspace{\stretch{1}}(3.9)

\begin{aligned}\partial_k \frac{x_i}{r}&=\frac{\delta_{ik}}{r} - \frac{ x_i x_k }{r^3}\end{aligned} \hspace{\stretch{1}}(3.10)

\begin{aligned}\partial_k \frac{x_k}{r}&=\frac{3}{r} - \frac{ x_k x_k }{r^3} = \frac{2}{r}\end{aligned} \hspace{\stretch{1}}(3.11)

The action of the momentum operators on the coordinates is

\begin{aligned}p_k x_i \Psi &=-i \hbar \partial_k x_i \Psi \\ &=-i \hbar \left( \delta_{ik} + x_i \partial_k \right) \Psi \\ &=\left( -i \hbar \delta_{ik} + x_i p_k \right) \Psi\end{aligned}

\begin{aligned}p_k x_k \Psi &=-i \hbar \partial_k x_k \Psi \\ &=-i \hbar \left( 3 + x_k \partial_k \right) \Psi\end{aligned}

Or

\begin{aligned}p_k x_i &= -i \hbar \delta_{ik} + x_i p_k \\ p_k x_k &= - 3 i \hbar + x_k p_k \end{aligned} \hspace{\stretch{1}}(3.12)

And finally

\begin{aligned}p_k \frac{1}{{r}} \Psi&=(p_k \frac{1}{{r}}) \Psi+ \frac{1}{{r}} p_k \Psi \\ &=-i \hbar \left( -\frac{x_k}{r^3}\right) \Psi+ \frac{1}{{r}} p_k \Psi \\ \end{aligned}

So

\begin{aligned}p_k \frac{1}{{r}} &= i \hbar \frac{x_k}{r^3} + \frac{1}{{r}}p_k\end{aligned} \hspace{\stretch{1}}(3.14)
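Since index slips are easy to make here, a sympy spot check of these intermediate results (with $\hbar = 1$), applying each operator to an arbitrary function, seems worthwhile:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', positive=True)
f = sp.Function('f')(x, y, z)
r = sp.sqrt(x**2 + y**2 + z**2)
I = sp.I
coords = (x, y, z)

# p_k x_k Psi = -3 i Psi + x_k p_k Psi   (hbar = 1, summation implied)
lhs = sum(-I*sp.diff(xk*f, xk) for xk in coords)
rhs = -3*I*f + sum(xk*(-I)*sp.diff(f, xk) for xk in coords)
assert sp.simplify(lhs - rhs) == 0

# p_k (1/r) Psi = i (x_k/r^3) Psi + (1/r) p_k Psi
for xk in coords:
    assert sp.simplify(-I*sp.diff(f/r, xk) - (I*xk/r**3*f - I*sp.diff(f, xk)/r)) == 0
```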

We can use these to rewrite $D$

\begin{aligned}D &= p_k \frac{x_k}{r} + \frac{x_k}{r} p_k \\ &= p_k x_k \frac{1}{{r}} + \frac{x_k}{r} p_k \\ &= \left( - 3 i \hbar + x_k p_k \right)\frac{1}{{r}} + \frac{x_k}{r} p_k \\ &= - \frac{3 i \hbar}{r} + x_k \left( i \hbar \frac{x_k}{r^3} + \frac{1}{{r}}p_k \right) + \frac{x_k}{r} p_k \\ \end{aligned}

\begin{aligned}D &= \frac{2}{r} ( -i \hbar + x_k p_k )\end{aligned} \hspace{\stretch{1}}(3.15)

This leaves us in the position to compute the commutator

\begin{aligned}\left[{D},{x_i}\right]&= \frac{2}{r} ( -i \hbar + x_k p_k ) x_i- \frac{2 x_i}{r} ( -i \hbar + x_k p_k ) \\ &= \frac{2}{r} x_k ( -i \hbar \delta_{ik} + x_i p_k )- \frac{2 x_i}{r} x_k p_k \\ &= -\frac{2 i \hbar x_i}{r} \end{aligned}

So, unless I'm doing something fundamentally wrong in the same way in both methods, this appears to be the desired result. I question my answer, since utilizing it for the later computation of $e^{i\alpha D/\hbar} x_i e^{-i\alpha D/\hbar}$ did not yield the expected answer.
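A symbolic check does support the $-2 i \hbar x_i/r$ value. Here $D = (2/r)(-i\hbar + x_k p_k)$ is applied as a differential operator (with $\hbar = 1$) to an arbitrary function using sympy:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', positive=True)
f = sp.Function('f')(x, y, z)
r = sp.sqrt(x**2 + y**2 + z**2)
I = sp.I
coords = (x, y, z)

def D(psi):
    """D psi = (2/r)(-i psi + x_k p_k psi), with hbar = 1."""
    return (2/r)*(-I*psi + sum(xk*(-I)*sp.diff(psi, xk) for xk in coords))

# [D, x_i] Psi = -2 i (x_i/r) Psi for each coordinate
for xi in coords:
    assert sp.simplify((D(xi*f) - xi*D(f)) + 2*I*xi*f/r) == 0
```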

### 7. $[D, p_i]$

\begin{aligned}\left[{D},{p_i}\right] &=\frac{2}{r} ( -i \hbar + x_k p_k ) p_i- p_i \frac{2}{r} ( -i \hbar + x_k p_k ) \\ &=\frac{2}{r} ( -i \hbar + x_k p_k ) p_i- \left( \frac{2 i \hbar x_i}{r^3} + \frac{2}{r} p_i \right) ( -i \hbar + x_k p_k ) \\ &=\frac{2}{r} \left( x_k p_k p_i - p_i x_k p_k \right)- \frac{i \hbar x_i}{r^2} \frac{2}{r} ( -i \hbar + x_k p_k ) \\ &=\frac{2}{r} \left( x_k p_k p_i - \left( -i \hbar \delta_{ik} + x_k p_i \right) p_k \right)- \frac{i \hbar x_i}{r^2} D \\ &=\frac{2 i \hbar}{r} p_i- \frac{i \hbar x_i}{r^2} D \\ &=-\frac{i \hbar}{r} \left( \frac{x_i}{r} D- 2 p_i\right) \qquad\square\end{aligned}

If there is some significance to this expansion, other than to get a feel for operator manipulation, it escapes me.

### 7. $[D, L_i]$

To expand $[D, L_i]$, it will be sufficient to consider any specific index $i \in \{1,2,3\}$ and then utilize cyclic permutation of the indexes in the result to generalize. Let’s pick $i=1$, for which we have

\begin{aligned}L_1 = x_2 p_3 - x_3 p_2 \end{aligned}

It appears we will want to know

\begin{aligned}p_m D &=p_m \frac{2}{r} ( -i \hbar + x_k p_k ) \\ &=\frac{2}{r}\left(\frac{i \hbar x_m}{r^2} + p_m\right)( -i \hbar + x_k p_k ) \\ &=\frac{2}{r} \left(\frac{\hbar^2 x_m}{r^2} + \frac{i \hbar x_m x_k }{r^2} p_k - i \hbar p_m + p_m x_k p_k \right) \\ &=\frac{2}{r} \left(\frac{\hbar^2 x_m}{r^2} + \frac{i \hbar x_m x_k }{r^2} p_k - 2 i \hbar p_m + x_k p_k p_m \right)\end{aligned}

and we also want

\begin{aligned}D x_m &=\frac{2}{r} ( -i \hbar + x_k p_k ) x_m \\ &=\frac{2}{r} ( -i \hbar x_m + x_k ( -i \hbar \delta_{km} + x_m p_k ) ) \\ &=\frac{2}{r} ( -2 i \hbar x_m + x_m x_k p_k ) \\ \end{aligned}

This also happens to be $D x_m = x_m \left( D - \frac{2 i \hbar }{r} \right)$, consistent with the commutator computed above, but does that help at all?

Assembling these we have

\begin{aligned}\left[{D},{L_1}\right]&=D x_2 p_3 - D x_3 p_2 - x_2 p_3 D + x_3 p_2 D \\ &=\frac{2}{r} ( -2 i \hbar x_2 + x_2 x_k p_k ) p_3- \frac{2}{r} ( -2 i \hbar x_3 + x_3 x_k p_k ) p_2 \\ &-\frac{2 x_2 }{r} \left(\frac{\hbar^2 x_3}{r^2} + \frac{i \hbar x_3 x_k }{r^2} p_k - 2 i \hbar p_3 + x_k p_k p_3 \right) \\ &+\frac{2 x_3 }{r} \left(\frac{\hbar^2 x_2}{r^2} + \frac{i \hbar x_2 x_k }{r^2} p_k - 2 i \hbar p_2 + x_k p_k p_2 \right) \\ \end{aligned}

With a bit of brute force it is simple enough to verify that all these terms cancel, leaving us with zero

\begin{aligned}\left[{D},{L_1}\right] = 0\end{aligned}

There surely must be an easier way to demonstrate this. Likely utilizing the commutator relationships derived earlier.
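Pending such a demonstration, here is a sympy check of the $\left[{D},{L_1}\right] = 0$ result, again treating everything as differential operators with $\hbar = 1$:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', positive=True)
f = sp.Function('f')(x, y, z)
r = sp.sqrt(x**2 + y**2 + z**2)
I = sp.I
coords = (x, y, z)

# D = (2/r)(-i + x_k p_k) and L_1 = x_2 p_3 - x_3 p_2, with hbar = 1
D = lambda psi: (2/r)*(-I*psi + sum(xk*(-I)*sp.diff(psi, xk) for xk in coords))
L1 = lambda psi: -I*(y*sp.diff(psi, z) - z*sp.diff(psi, y))

assert sp.simplify(D(L1(f)) - L1(D(f))) == 0
```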

### 7. $e^{i\alpha D/\hbar} x_i e^{-i\alpha D/\hbar}$

We will need to evaluate $D^k x_i$. We have the first power from our commutator relation

\begin{aligned}D x_i &= x_i \left( D - \frac{ 2 i \hbar }{r} \right)\end{aligned}

A successive application of this operator therefore yields

\begin{aligned}D^2 x_i &= D x_i \left( D - \frac{ 2 i \hbar }{r} \right) \\ &= x_i \left( D - \frac{ 2 i \hbar }{r} \right)^2 \\ \end{aligned}

So we have

\begin{aligned}D^k x_i &= x_i \left( D - \frac{ 2 i \hbar }{r} \right)^k \\ \end{aligned}

This now preps us to expand the first product in the desired exponential sandwich

\begin{aligned}e^{i\alpha D/\hbar} x_i&=x_i + \sum_{k=1}^\infty \frac{1}{{k!}} \left( \frac{i \alpha D}{\hbar} \right)^k x_i \\ &=x_i + \sum_{k=1}^\infty \frac{1}{{k!}} \left( \frac{i \alpha}{\hbar} \right)^k D^k x_i \\ &=x_i + \sum_{k=1}^\infty \frac{1}{{k!}} \left( \frac{i \alpha}{\hbar} \right)^k x_i \left( D - \frac{ 2 i \hbar }{r} \right)^k \\ &= x_i e^{ \frac{i \alpha }{\hbar} \left( D - \frac{ 2 i \hbar }{r} \right) } \\ &= x_i e^{ 2 \alpha /r } e^{ i \alpha D /\hbar }.\end{aligned}

The exponential sandwich then produces

\begin{aligned}e^{i\alpha D/\hbar} x_i e^{-i\alpha D/\hbar} &= e^{2 \alpha/r } x_i \end{aligned}

Note that this isn't the value we are supposed to get. The last factorization of the exponential above is also suspect, since $D$ and $1/r$ do not commute. Either my value for $D x_i$ is off by a factor of $2/r$, or the problem in the text contains a typo; the expected $e^\alpha x_i$ is what results for the dilatation-type operator $\frac{1}{{2}} \left( \mathbf{p} \cdot \mathbf{r} + \mathbf{r} \cdot \mathbf{p} \right)$, with no unit vectors.

## 8. Reduction of some commutators using the fundamental commutator relation.

Using the fundamental commutation relation

\begin{aligned}\left[{p},{x}\right] = -i \hbar,\end{aligned}

which we can also write as

\begin{aligned}p x = x p -i \hbar,\end{aligned}

expand $\left[{x},{p^2}\right]$, $\left[{x^2},{p}\right]$, and $\left[{x^2},{p^2}\right]$.

The first is

\begin{aligned}\left[{x},{p^2}\right] &= x p^2 - p^2 x \\ &= x p^2 - p (p x) \\ &= x p^2 - p (x p -i \hbar) \\ &= x p^2 - (x p -i \hbar) p + i \hbar p \\ &= 2 i \hbar p \\ \end{aligned}

The second is

\begin{aligned}\left[{x^2},{p}\right] &= x^2 p - p x^2 \\ &= x^2 p - (x p - i\hbar) x \\ &= x^2 p - x (x p - i\hbar) + i \hbar x \\ &= 2 i \hbar x \\ \end{aligned}

Note that it is helpful for the last reduction of this problem to observe that we can write this as

\begin{aligned}p x^2 &= x^2 p - 2 i \hbar x \\ \end{aligned}

Finally for this last we have

\begin{aligned}\left[{x^2},{p^2}\right] &= x^2 p^2 - p^2 x^2 \\ &= x^2 p^2 - p (x^2 p - 2 i \hbar x) \\ &= x^2 p^2 - (x^2 p - 2 i \hbar x) p + 2 i \hbar (x p - i \hbar) \\ &= 4 i \hbar x p - 2 (i \hbar)^2 \\ \end{aligned}

That’s about as reduced as this can be made, but it is not very tidy looking. From this point we can simplify it a bit by factoring

\begin{aligned}\left[{x^2},{p^2}\right] &= 4 i \hbar x p - 2 (i \hbar)^2 \\ &= 2 i \hbar ( 2 x p - i \hbar) \\ &= 2 i \hbar ( x p + p x ) \\ &= 2 i \hbar \left\{{x},{p}\right\} \end{aligned}
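All three of these commutators can be double checked mechanically by letting $p = -i \,d/dx$ (with $\hbar = 1$) act on an arbitrary function in sympy:

```python
import sympy as sp

xx = sp.symbols('x', real=True)
f = sp.Function('f')(xx)
I = sp.I
p = lambda g: -I*sp.diff(g, xx)   # p = -i d/dx, hbar = 1

# [x, p^2] = 2 i p
assert sp.simplify(xx*p(p(f)) - p(p(xx*f)) - 2*I*p(f)) == 0
# [x^2, p] = 2 i x
assert sp.simplify(xx**2*p(f) - p(xx**2*f) - 2*I*xx*f) == 0
# [x^2, p^2] = 2 i {x, p}
anticomm = xx*p(f) + p(xx*f)
assert sp.simplify(xx**2*p(p(f)) - p(p(xx**2*f)) - 2*I*anticomm) == 0
```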

## 9. Finite displacement operator.

### 9. Part I.

For

\begin{aligned}F(d) = e^{-i p d/\hbar},\end{aligned}

the first part of this problem is to show that

\begin{aligned}\left[{x},{F(d)}\right] = x F(d) - F(d) x = d F(d)\end{aligned}

We need to evaluate

\begin{aligned}e^{-i p d/\hbar} x = \sum_{k=0}^\infty \frac{1}{{k!}} \left( \frac{-i p d}{\hbar} \right)^k x.\end{aligned}

To do so requires a reduction of $p^k x$. For $k=2$ we have

\begin{aligned}p^2 x &= p ( x p - i\hbar ) \\ &= ( x p - i\hbar ) p - i \hbar p \\ &= x p^2 - 2 i\hbar p.\end{aligned}

For the cube we get $p^3 x = x p^3 - 3 i\hbar p^2$, supplying confirmation of an induction hypothesis $p^k x = x p^k - k i\hbar p^{k-1}$, which can be verified

\begin{aligned}p^{k+1} x &= p ( x p^k - k i \hbar p^{k-1}) \\ &= (x p - i\hbar) p^k - k i \hbar p^k \\ &= x p^{k+1} - (k+1) i \hbar p^k \qquad\square\end{aligned}

For our exponential we then have

\begin{aligned}e^{-i p d/\hbar} x &= x + \sum_{k=1}^\infty \frac{1}{{k!}} \left( \frac{-i d}{\hbar} \right)^k (x p^k - k i\hbar p^{k-1}) \\ &= x e^{-i p d /\hbar }+ \sum_{k=1}^\infty \frac{1}{{(k-1)!}} \left( \frac{-i p d}{\hbar} \right)^{k-1} (-i d/\hbar)(- i\hbar) \\ &= ( x - d ) e^{-i p d /\hbar }.\end{aligned}

Put back into our commutator we have

\begin{aligned}\left[{x},{e^{-i p d/\hbar}}\right] = d e^{-ip d/\hbar},\end{aligned}

completing the proof.
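Using the fact that $F(d)$ acts on position space wavefunctions as a translation, $F(d) \psi(x) = \psi(x - d)$ (as also used in Part II below), this commutator identity reduces to a one line sympy check:

```python
import sympy as sp

xv, d = sp.symbols('x d', real=True)
psi = sp.Function('psi')

# F(d) acts by translation: F(d) psi(x) = psi(x - d)
F = lambda g: g.subs(xv, xv - d)

lhs = xv*F(psi(xv)) - F(xv*psi(xv))   # [x, F(d)] psi
assert sp.expand(lhs - d*F(psi(xv))) == 0
```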

### 9. Part II.

For state ${\lvert {\alpha} \rangle}$ with ${\lvert {\alpha_d} \rangle} = F(d) {\lvert {\alpha} \rangle}$, show that the expectation values satisfy

\begin{aligned}\left\langle{{X}}\right\rangle_d = \left\langle{{X}}\right\rangle + d\end{aligned}

\begin{aligned}\left\langle{{X}}\right\rangle_d &={\langle {\alpha_d} \rvert} X {\lvert {\alpha_d} \rangle} \\ &=\iint dx' dx'' \left\langle{{\alpha_d}} \vert {{x'}}\right\rangle {\langle {x'} \rvert} X {\lvert {x''} \rangle} \left\langle{{x''}} \vert {{\alpha_d}}\right\rangle \\ &=\iint dx' dx'' \alpha_d^{*}(x') \delta(x' -x'') x' \alpha_d(x'') \\ &=\int dx' \alpha_d^{*}(x') x' \alpha_d(x') \\ \end{aligned}

But

\begin{aligned}\alpha_d(x') &= \exp\left( -\frac{i d }{\hbar} (-i\hbar) \frac{\partial}{\partial x'} \right) \alpha(x') \\ &= e^{- d \frac{\partial}{\partial x'} } \alpha(x') \\ &= \alpha(x' - d),\end{aligned}

so our position expectation is

\begin{aligned}\left\langle{{X}}\right\rangle_d &=\int dx' \alpha^{*}(x' -d) x' \alpha(x'- d).\end{aligned}

A change of variables $x = x' -d$ gives us

\begin{aligned}\left\langle{{X}}\right\rangle_d &=\int dx \alpha^{*}(x) (x + d) \alpha(x) \\ &=\left\langle{{X}}\right\rangle + d \int dx \alpha^{*}(x) \alpha(x) \\ &=\left\langle{{X}}\right\rangle + d \qquad\square\end{aligned}
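As a concrete sanity check of this expectation shift, here is a sympy computation with a normalized Gaussian for $\alpha(x)$ and arbitrarily chosen values for the center and displacement (both picked purely for illustration):

```python
import sympy as sp

xv = sp.symbols('x', real=True)
d, x0 = sp.Rational(3, 2), sp.Rational(1, 4)   # arbitrary displacement and center

# normalized Gaussian alpha(x); the displaced state is alpha(x - d)
alpha = sp.exp(-(xv - x0)**2/2)/sp.pi**sp.Rational(1, 4)
mean   = sp.integrate(xv*alpha**2, (xv, -sp.oo, sp.oo))
mean_d = sp.integrate(xv*alpha.subs(xv, xv - d)**2, (xv, -sp.oo, sp.oo))

assert sp.simplify(mean_d - (mean + d)) == 0
```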

## 10. Hamiltonian position commutator and double commutator

For

\begin{aligned}H = \frac{1}{{2m}} p^2 + V(x)\end{aligned}

calculate $\left[{H},{x}\right]$, and $\left[{\left[{H},{x}\right]},{x}\right]$.

These are

\begin{aligned}\left[{H},{x}\right]&=\frac{1}{{2m}} p^2 x + V(x) x -\frac{1}{{2m}} x p^2 - x V(x) \\ &=\frac{1}{{2m}} p ( x p - i \hbar) -\frac{1}{{2m}} x p^2 \\ &=\frac{1}{{2m}} \left( ( x p - i \hbar) p -i \hbar p \right) -\frac{1}{{2m}} x p^2 \\ &=-\frac{i\hbar p}{m} \\ \end{aligned}

and

\begin{aligned}\left[{\left[{H},{x}\right]},{x}\right]&=-\frac{i\hbar }{m} \left[{p},{x}\right] \\ &=\frac{(-i\hbar)^2 }{m} \\ &=-\frac{\hbar^2 }{m} \\ \end{aligned}
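Both commutators can be verified with sympy for an arbitrary potential $V(x)$, treating $H$ as a differential operator with $\hbar = 1$:

```python
import sympy as sp

xv = sp.symbols('x', real=True)
m = sp.symbols('m', positive=True)
f = sp.Function('f')(xv)
V = sp.Function('V')(xv)
I = sp.I

H = lambda g: -sp.diff(g, xv, 2)/(2*m) + V*g   # H = p^2/2m + V(x), hbar = 1
p = lambda g: -I*sp.diff(g, xv)
comm = lambda g: H(xv*g) - xv*H(g)             # [H, x] applied to g

assert sp.simplify(comm(f) - (-I/m)*p(f)) == 0          # [H, x] = -i p/m
assert sp.simplify(comm(xv*f) - xv*comm(f) + f/m) == 0  # [[H, x], x] = -1/m
```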

We also have to show that

\begin{aligned}\sum_k (E_k -E_n) {\left\lvert{ {\langle {k} \rvert} x {\lvert {n} \rangle} }\right\rvert}^2 = \frac{\hbar^2}{2m}\end{aligned}

Expanding the absolute value in terms of conjugates we have

\begin{aligned}\sum_k (E_k -E_n) {\left\lvert{ {\langle {k} \rvert} x {\lvert {n} \rangle} }\right\rvert}^2 &= \sum_k (E_k -E_n) {\langle {k} \rvert} x {\lvert {n} \rangle} {\langle {n} \rvert} x {\lvert {k} \rangle} \\ &= \sum_k {\langle {k} \rvert} x {\lvert {n} \rangle} {\langle {n} \rvert} x E_k {\lvert {k} \rangle} -{\langle {k} \rvert} x E_n {\lvert {n} \rangle} {\langle {n} \rvert} x {\lvert {k} \rangle} \\ &= \sum_k {\langle {n} \rvert} x H {\lvert {k} \rangle} {\langle {k} \rvert} x {\lvert {n} \rangle} - {\langle {n} \rvert} x {\lvert {k} \rangle} {\langle {k} \rvert} x H {\lvert {n} \rangle} \\ &= {\langle {n} \rvert} x H x {\lvert {n} \rangle} - {\langle {n} \rvert} x x H {\lvert {n} \rangle} \\ &= {\langle {n} \rvert} x \left[{H},{x}\right] {\lvert {n} \rangle} \\ &= -\frac{i \hbar}{m} {\langle {n} \rvert} x p {\lvert {n} \rangle} \\ \end{aligned}

It is not obvious where to go from here. Taking the clue from the problem that the result involves the double commutator, we have

\begin{aligned}- \frac{\hbar^2}{m}&={\langle {n} \rvert} \left[{\left[{H},{x}\right]},{x}\right] {\lvert {n} \rangle} \\ &={\langle {n} \rvert} H x^2 - 2 x H x + x^2 H {\lvert {n} \rangle} \\ &=2 E_n {\langle {n} \rvert} x^2 {\lvert { n} \rangle} - 2 {\langle {n} \rvert} x H x {\lvert {n} \rangle} \\ &=2 E_n {\langle {n} \rvert} x^2 {\lvert { n} \rangle} - 2 {\langle {n} \rvert} ( -\left[{H},{x}\right] + H x) x {\lvert {n} \rangle} \\ &=2 {\langle {n} \rvert} \left[{H},{x}\right] x {\lvert {n} \rangle} \\ &=-\frac{2 i \hbar}{m} {\langle {n} \rvert} p x {\lvert {n} \rangle} \\ &=-\frac{2 i \hbar}{m} {\langle {n} \rvert} x p - i \hbar {\lvert {n} \rangle} \\ &=-\frac{2 i \hbar}{m} {\langle {n} \rvert} x p {\lvert {n} \rangle} +\frac{2 (i \hbar)^2}{m} \end{aligned}

So, somewhat flukily by working backwards, with a last rearrangement, we now have

\begin{aligned}-\frac{i \hbar}{m} {\langle {n} \rvert} x p {\lvert {n} \rangle} &= \frac{\hbar^2}{m} -\frac{\hbar^2}{2 m} \\ &= \frac{\hbar^2}{2 m}\end{aligned}

Substitution above gives the desired result. This is extremely ugly, and doesn’t follow any sort of logical progression. Is there a good way to sequence this proof?
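One way to gain some confidence in the result itself is a numerical check. For the harmonic oscillator (with $m = \hbar = \omega = 1$, so the expected value is $1/2$) the matrix elements of $x$ in the number basis are known, and the sum rule can be evaluated directly with numpy:

```python
import numpy as np

# harmonic oscillator in a truncated number basis, m = hbar = omega = 1
N = 60
a = np.diag(np.sqrt(np.arange(1, N)), 1)   # annihilation operator
x = (a + a.T)/np.sqrt(2)                   # x = (a + a^dagger)/sqrt(2)
E = np.arange(N) + 0.5                     # E_n = n + 1/2

n = 4                                      # any state well below the truncation
total = sum((E[k] - E[n])*abs(x[k, n])**2 for k in range(N))
assert abs(total - 0.5) < 1e-12            # hbar^2/(2m) = 1/2
```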

## 11. Another double commutator.

### Attempt 1. Incomplete.

\begin{aligned}H = \frac{\mathbf{p}^2}{2m} + V(\mathbf{r})\end{aligned}

use $\left[{\left[{H},{e^{i \mathbf{k} \cdot \mathbf{r}}}\right]},{e^{-i \mathbf{k} \cdot \mathbf{r}}}\right]$ to obtain

\begin{aligned}\sum_n (E_n - E_s) {\left\lvert{{\langle {n} \rvert} e^{i\mathbf{k} \cdot \mathbf{r}} {\lvert {s} \rangle}}\right\rvert}^2\end{aligned}

First evaluate the commutators. The first is

\begin{aligned}\left[{H},{ e^{i \mathbf{k} \cdot \mathbf{r}}}\right]&= \frac{1}{{2m}} \left[{\mathbf{p}^2},{e^{i\mathbf{k} \cdot \mathbf{r}}}\right]\end{aligned}

The Laplacian applied to this exponential is

\begin{aligned}\boldsymbol{\nabla}^2 e^{i \mathbf{k} \cdot \mathbf{r} } \Psi&=\partial_m \partial_m e^{i k_n x_n } \Psi \\ &=\partial_m (i k_m e^{i \mathbf{k}\cdot \mathbf{r}} \Psi + e^{i \mathbf{k} \cdot \mathbf{r} } \partial_m \Psi ) \\ &=- \mathbf{k}^2 e^{i \mathbf{k}\cdot \mathbf{r}} \Psi + i e^{i \mathbf{k} \cdot \mathbf{r} } \mathbf{k} \cdot \boldsymbol{\nabla} \Psi+ i e^{i \mathbf{k} \cdot \mathbf{r}} \mathbf{k} \cdot \boldsymbol{\nabla} \Psi+ e^{i \mathbf{k} \cdot \mathbf{r}} \boldsymbol{\nabla}^2 \Psi\end{aligned}

Factoring out the exponentials this is

\begin{aligned}\boldsymbol{\nabla}^2 e^{i \mathbf{k} \cdot \mathbf{r} } &=e^{i \mathbf{k}\cdot \mathbf{r}} \left(- \mathbf{k}^2 + 2 i \mathbf{k} \cdot \boldsymbol{\nabla} + \boldsymbol{\nabla}^2 \right),\end{aligned}

and in terms of $\mathbf{p}$, we have

\begin{aligned}\mathbf{p}^2 e^{i \mathbf{k}\cdot \mathbf{r}} &= e^{i \mathbf{k}\cdot \mathbf{r}} \left((\hbar\mathbf{k})^2 + 2 (\hbar \mathbf{k}) \cdot \mathbf{p} + \mathbf{p}^2 \right)=e^{i \mathbf{k}\cdot \mathbf{r}} (\hbar \mathbf{k} + \mathbf{p})^2\end{aligned}

So, finally, our first commutator is

\begin{aligned}\left[{H},{ e^{i \mathbf{k} \cdot \mathbf{r}}}\right]&= \frac{1}{{2m}}e^{i \mathbf{k}\cdot \mathbf{r}} \left((\hbar\mathbf{k})^2 + 2 (\hbar \mathbf{k}) \cdot \mathbf{p} \right)\end{aligned} \hspace{\stretch{1}}(3.16)

The double commutator is then

\begin{aligned}\left[{\left[{H},{e^{i \mathbf{k} \cdot \mathbf{r}}}\right]},{e^{-i \mathbf{k} \cdot \mathbf{r}}}\right]&=e^{i\mathbf{k} \cdot \mathbf{r}} \frac{\hbar \mathbf{k}}{m} \cdot \left( \mathbf{p} e^{-i \mathbf{k} \cdot \mathbf{r}} - e^{-i \mathbf{k} \cdot \mathbf{r}} \mathbf{p} \right)\end{aligned}

To simplify this we want

\begin{aligned}\mathbf{k} \cdot \boldsymbol{\nabla} e^{-i \mathbf{k} \cdot \mathbf{r}} \Psi &=k_n \partial_n e^{-i k_m x_m } \Psi \\ &=e^{-i \mathbf{k} \cdot \mathbf{r} }\left(k_n (-i k_n) \Psi + k_n \partial_n \Psi \right) \\ &=e^{-i \mathbf{k} \cdot \mathbf{r} } \left( -i \mathbf{k}^2 + \mathbf{k} \cdot \boldsymbol{\nabla} \right) \Psi\end{aligned}

The double commutator is then left with just

\begin{aligned}\left[{\left[{H},{e^{i \mathbf{k} \cdot \mathbf{r}}}\right]},{e^{-i \mathbf{k} \cdot \mathbf{r}}}\right]&=- \frac{1}{{m}} (\hbar \mathbf{k})^2 \end{aligned} \hspace{\stretch{1}}(3.17)

Now, returning to the energy expression

\begin{aligned}\sum_n (E_n - E_s) {\left\lvert{{\langle {n} \rvert} e^{i\mathbf{k} \cdot \mathbf{r}} {\lvert {s} \rangle}}\right\rvert}^2&=\sum_n (E_n - E_s) {\langle {s} \rvert} e^{-i\mathbf{k} \cdot \mathbf{r}} {\lvert {n} \rangle} {\langle {n} \rvert} e^{i\mathbf{k} \cdot \mathbf{r}} {\lvert {s} \rangle} \\ &=\sum_n {\langle {s} \rvert} e^{-i\mathbf{k} \cdot \mathbf{r}} H {\lvert {n} \rangle} {\langle {n} \rvert} e^{i\mathbf{k} \cdot \mathbf{r}} {\lvert {s} \rangle} -{\langle {s} \rvert} e^{-i\mathbf{k} \cdot \mathbf{r}} {\lvert {n} \rangle} {\langle {n} \rvert} e^{i\mathbf{k} \cdot \mathbf{r}} H {\lvert {s} \rangle} \\ &={\langle {s} \rvert} e^{-i\mathbf{k} \cdot \mathbf{r}} H e^{i\mathbf{k} \cdot \mathbf{r}} {\lvert {s} \rangle} -{\langle {s} \rvert} e^{-i\mathbf{k} \cdot \mathbf{r}} e^{i\mathbf{k} \cdot \mathbf{r}} H {\lvert {s} \rangle} \\ &={\langle {s} \rvert} e^{-i\mathbf{k} \cdot \mathbf{r}} \left[{H},{e^{i\mathbf{k} \cdot \mathbf{r}}}\right] {\lvert {s} \rangle} \\ &=\frac{1}{{2m}} {\langle {s} \rvert} e^{-i\mathbf{k} \cdot \mathbf{r}} e^{i \mathbf{k}\cdot \mathbf{r}} \left((\hbar\mathbf{k})^2 + 2 (\hbar \mathbf{k}) \cdot \mathbf{p} \right){\lvert {s} \rangle} \\ &=\frac{1}{{2m}} {\langle {s} \rvert} (\hbar\mathbf{k})^2 + 2 (\hbar \mathbf{k}) \cdot \mathbf{p} {\lvert {s} \rangle} \\ &=\frac{(\hbar\mathbf{k})^2}{2m} + \frac{1}{{m}} {\langle {s} \rvert} (\hbar \mathbf{k}) \cdot \mathbf{p} {\lvert {s} \rangle} \\ \end{aligned}

I can’t figure out what to do with the $\hbar \mathbf{k} \cdot \mathbf{p}$ expectation, and keep going around in circles.

I figure there is some trick related to the double commutator, so expanding the expectation of that seems appropriate

\begin{aligned}-\frac{1}{{m}} (\hbar \mathbf{k})^2 &={\langle {s} \rvert} \left[{\left[{H},{e^{i \mathbf{k} \cdot \mathbf{r}}}\right]},{e^{-i \mathbf{k} \cdot \mathbf{r}}}\right] {\lvert {s} \rangle} \\ &={\langle {s} \rvert} \left[{H},{e^{i \mathbf{k} \cdot \mathbf{r}}}\right] e^{-i \mathbf{k} \cdot \mathbf{r}}-e^{-i \mathbf{k} \cdot \mathbf{r}}\left[{H},{e^{i \mathbf{k} \cdot \mathbf{r}}}\right] {\lvert {s} \rangle} \\ &=\frac{1}{{2m }} {\langle {s} \rvert} e^{ i \mathbf{k} \cdot \mathbf{r}} ( (\hbar \mathbf{k})^2 + 2 \hbar \mathbf{k} \cdot \mathbf{p}) e^{-i \mathbf{k} \cdot \mathbf{r}}-e^{-i \mathbf{k} \cdot \mathbf{r}} e^{ i \mathbf{k} \cdot \mathbf{r}} ( (\hbar \mathbf{k})^2 + 2 \hbar \mathbf{k} \cdot \mathbf{p}) {\lvert {s} \rangle} \\ &=\frac{1}{{m}} {\langle {s} \rvert} e^{ i \mathbf{k} \cdot \mathbf{r}} (\hbar \mathbf{k} \cdot \mathbf{p}) e^{-i \mathbf{k} \cdot \mathbf{r}}-\hbar \mathbf{k} \cdot \mathbf{p} {\lvert {s} \rangle} \\ \end{aligned}

### Attempt 2.

I was going in circles above. With the help of betel on physicsforums, I got pointed in the right direction. Here’s a rework of this problem from scratch, also benefiting from hindsight.

Our starting point is the same, with the evaluation of the first commutator

\begin{aligned}\left[{H},{ e^{i \mathbf{k} \cdot \mathbf{r}}}\right]&= \frac{1}{{2m}} \left[{\mathbf{p}^2},{e^{i\mathbf{k} \cdot \mathbf{r}}}\right].\end{aligned} \hspace{\stretch{1}}(3.18)

To continue we need to know how the momentum operator acts on an exponential of this form

\begin{aligned}\mathbf{p} e^{\pm i \mathbf{k} \cdot \mathbf{r}} \Psi&=-i \hbar \mathbf{e}_m \partial_m e^{\pm i k_n x_n } \Psi \\ &=e^{\pm i \mathbf{k} \cdot \mathbf{r}} \left( -i \hbar (\pm i \mathbf{e}_m k_m ) \Psi -i \hbar \mathbf{e}_m \partial_m \Psi\right).\end{aligned}

This gives us the helpful relationship

\begin{aligned}\mathbf{p} e^{\pm i \mathbf{k} \cdot \mathbf{r}} = e^{\pm i \mathbf{k} \cdot \mathbf{r}} (\mathbf{p} \pm \hbar \mathbf{k}).\end{aligned} \hspace{\stretch{1}}(3.19)

Squared application of the momentum operator on the positive exponential found in the first commutator 3.18, gives us

\begin{aligned}\mathbf{p}^2 e^{i \mathbf{k} \cdot \mathbf{r}} = e^{i \mathbf{k} \cdot \mathbf{r}} (\hbar \mathbf{k} + \mathbf{p})^2 = e^{i \mathbf{k} \cdot \mathbf{r}} ((\hbar \mathbf{k})^2 + 2 \hbar \mathbf{k} \cdot \mathbf{p} + \mathbf{p}^2),\end{aligned} \hspace{\stretch{1}}(3.20)

with which we can evaluate this first commutator.

\begin{aligned}\left[{H},{ e^{i \mathbf{k} \cdot \mathbf{r}}}\right]&= \frac{1}{{2m}} e^{i \mathbf{k} \cdot \mathbf{r}} ((\hbar \mathbf{k})^2 + 2 \hbar \mathbf{k} \cdot \mathbf{p}).\end{aligned} \hspace{\stretch{1}}(3.21)

For the double commutator we have

\begin{aligned}2m \left[{\left[{H},{e^{i \mathbf{k} \cdot \mathbf{r}}}\right]},{e^{-i \mathbf{k} \cdot \mathbf{r}}}\right]&=e^{i \mathbf{k} \cdot \mathbf{r}} ((\hbar \mathbf{k})^2 + 2 \hbar \mathbf{k} \cdot \mathbf{p}) e^{-i \mathbf{k} \cdot \mathbf{r}}-((\hbar \mathbf{k})^2 + 2 \hbar \mathbf{k} \cdot \mathbf{p}) \\ &=e^{i \mathbf{k} \cdot \mathbf{r}} 2 (\hbar \mathbf{k} \cdot \mathbf{p}) e^{-i \mathbf{k} \cdot \mathbf{r}}-2 \hbar \mathbf{k} \cdot \mathbf{p} \\ &=2 \hbar \mathbf{k} \cdot (\mathbf{p} - \hbar \mathbf{k})-2 \hbar \mathbf{k} \cdot \mathbf{p},\end{aligned}

so for the double commutator we have just a scalar

\begin{aligned}\left[{\left[{H},{e^{i \mathbf{k} \cdot \mathbf{r}}}\right]},{e^{-i \mathbf{k} \cdot \mathbf{r}}}\right]&= -\frac{(\hbar \mathbf{k})^2}{m}.\end{aligned} \hspace{\stretch{1}}(3.22)
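This scalar value can be double checked with sympy in one dimension, where the same manipulations go through unchanged (with $\hbar = 1$ and an arbitrary potential):

```python
import sympy as sp

xv, k = sp.symbols('x k', real=True)
m = sp.symbols('m', positive=True)
f = sp.Function('f')(xv)
V = sp.Function('V')(xv)
I = sp.I

H = lambda g: -sp.diff(g, xv, 2)/(2*m) + V*g   # hbar = 1
ep, em = sp.exp(I*k*xv), sp.exp(-I*k*xv)
c1 = lambda g: H(ep*g) - ep*H(g)               # [H, e^{ikx}] applied to g

dc = c1(em*f) - em*c1(f)                       # [[H, e^{ikx}], e^{-ikx}] f
assert sp.simplify(dc + k**2*f/m) == 0         # = -(hbar k)^2/m with hbar = 1
```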

Now consider the expectation of this double commutator, expanded with some unintuitive steps that have been motivated by working backwards

\begin{aligned}{\langle {s} \rvert} \left[{\left[{H},{e^{i \mathbf{k} \cdot \mathbf{r}}}\right]},{e^{-i \mathbf{k} \cdot \mathbf{r}}}\right] {\lvert {s} \rangle}&={\langle {s} \rvert} 2 H - e^{i \mathbf{k} \cdot \mathbf{r}} H e^{-i \mathbf{k} \cdot \mathbf{r}} - e^{-i \mathbf{k} \cdot \mathbf{r}} H e^{i \mathbf{k} \cdot \mathbf{r}} {\lvert {s} \rangle} \\ &={\langle {s} \rvert} 2 e^{-i \mathbf{k} \cdot \mathbf{r}} e^{i \mathbf{k} \cdot \mathbf{r}} H- 2 e^{-i \mathbf{k} \cdot \mathbf{r}} H e^{i \mathbf{k} \cdot \mathbf{r}} {\lvert {s} \rangle} \\ &=2 \sum_n {\langle {s} \rvert} e^{-i \mathbf{k} \cdot \mathbf{r}} {\lvert {n} \rangle} {\langle {n} \rvert} e^{i \mathbf{k} \cdot \mathbf{r}} H {\lvert {s} \rangle}- {\langle {s} \rvert} e^{-i \mathbf{k} \cdot \mathbf{r}} H {\lvert {n} \rangle} {\langle {n} \rvert} e^{i \mathbf{k} \cdot \mathbf{r}} {\lvert {s} \rangle} \\ &=2 \sum_n E_s {\langle {s} \rvert} e^{-i \mathbf{k} \cdot \mathbf{r}} {\lvert {n} \rangle} {\langle {n} \rvert} e^{i \mathbf{k} \cdot \mathbf{r}} {\lvert {s} \rangle}- E_n {\langle {s} \rvert} e^{-i \mathbf{k} \cdot \mathbf{r}} {\lvert {n} \rangle} {\langle {n} \rvert} e^{i \mathbf{k} \cdot \mathbf{r}} {\lvert {s} \rangle} \\ &=2 \sum_n (E_s - E_n){\left\lvert{{\langle {n} \rvert} e^{i \mathbf{k} \cdot \mathbf{r}} {\lvert {s} \rangle}}\right\rvert}^2\end{aligned}

By 3.22, we have completed the problem

\begin{aligned}\sum_n (E_n - E_s) {\left\lvert{{\langle {n} \rvert} e^{i \mathbf{k} \cdot \mathbf{r}} {\lvert {s} \rangle}}\right\rvert}^2 &= \frac{(\hbar \mathbf{k})^2}{2m}.\end{aligned} \hspace{\stretch{1}}(3.23)
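As with the sum rule of the previous problem, a numpy illustration using the harmonic oscillator (with $m = \hbar = \omega = 1$) is possible, building the exponential by diagonalizing the truncated $x$ matrix; truncation error is negligible for low lying states:

```python
import numpy as np

# harmonic oscillator again, m = hbar = omega = 1; expect ~ k^2/2
N, s, kval = 120, 2, 0.7
a = np.diag(np.sqrt(np.arange(1, N)), 1)
x = (a + a.T)/np.sqrt(2)
lam, U = np.linalg.eigh(x)
Ek = U @ np.diag(np.exp(1j*kval*lam)) @ U.T    # e^{i k x} in the number basis
E = np.arange(N) + 0.5

total = sum((E[n] - E[s])*abs(Ek[n, s])**2 for n in range(N))
assert abs(total - kval**2/2) < 1e-6
```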

There is one subtlety above that is worth explicit mention before moving on, in particular, I did not find it intuitive that the following was true

\begin{aligned}{\langle {s} \rvert} e^{i \mathbf{k} \cdot \mathbf{r}} H e^{-i \mathbf{k} \cdot \mathbf{r}} + e^{-i \mathbf{k} \cdot \mathbf{r}} H e^{i \mathbf{k} \cdot \mathbf{r}} {\lvert {s} \rangle} &={\langle {s} \rvert} 2 e^{-i \mathbf{k} \cdot \mathbf{r}} H e^{i \mathbf{k} \cdot \mathbf{r}} {\lvert {s} \rangle}.\end{aligned} \hspace{\stretch{1}}(3.24)

However, observe that both of these exponential sandwich operators $e^{i \mathbf{k} \cdot \mathbf{r}} H e^{-i \mathbf{k} \cdot \mathbf{r}}$, and $e^{-i \mathbf{k} \cdot \mathbf{r}} H e^{i \mathbf{k} \cdot \mathbf{r}}$ are Hermitian, since we have for example

\begin{aligned}(e^{i \mathbf{k} \cdot \mathbf{r}} H e^{-i \mathbf{k} \cdot \mathbf{r}})^\dagger&= (e^{-i \mathbf{k} \cdot \mathbf{r}})^\dagger H^\dagger (e^{i \mathbf{k} \cdot \mathbf{r}})^\dagger \\ &= e^{i \mathbf{k} \cdot \mathbf{r}} H e^{-i \mathbf{k} \cdot \mathbf{r}}\end{aligned}

Also observe that these operators are both complex conjugates of each other, and with $\mathbf{k} \cdot \mathbf{r} = \alpha$ for short, can be written

\begin{aligned}e^{i \alpha} H e^{-i \alpha}&= \cos\alpha H \cos \alpha + \sin\alpha H \sin\alpha+ i\sin\alpha H \cos \alpha -i \cos\alpha H \sin\alpha \\ e^{-i \alpha} H e^{i \alpha} &= \cos\alpha H \cos \alpha + \sin\alpha H \sin\alpha- i\sin\alpha H \cos \alpha +i \cos\alpha H \sin\alpha\end{aligned} \hspace{\stretch{1}}(3.25)

Because $H$ is real valued, and the expectation value of a Hermitian operator is real valued, none of the imaginary terms can contribute to the expectation values, and in the summation of 3.24 we can thus pick and double either of the exponential sandwich terms, as desired.

# References

[1] B.R. Desai. Quantum Mechanics with Basic Field Theory. Cambridge University Press, 2009.

[2] R. Liboff. Introductory Quantum Mechanics. 2003.

[3] D. Bohm. Quantum Theory. Courier Dover Publications, 1989.

[4] Peeter Joot. Spherical Polar unit vectors in exponential form. [online]. http://sites.google.com/site/peeterjoot/math2009/sphericalPolarUnit.pdf.