
Desai Chapter II notes and problems.

Posted by peeterjoot on October 9, 2010

[Click here for a PDF of this post with nicer formatting]

Motivation.

Chapter II notes for [1].

Notes

Canonical Commutator

Based on the canonical relationship [X,P] = i\hbar, and \left\langle{{x'}} \vert {{x}}\right\rangle = \delta(x'-x), Desai determines the form of the P operator in continuous space. A consequence of this is that the matrix element of the momentum operator is found to have a delta function specification

\begin{aligned}{\langle {x'} \rvert} P {\lvert {x} \rangle} = \delta(x - x') \left( -i \hbar \frac{d}{dx} \right).\end{aligned}

In particular the matrix element associated with the state {\lvert {\phi} \rangle} is found to be

\begin{aligned}{\langle {x'} \rvert} P {\lvert {\phi} \rangle} = -i \hbar \frac{d}{dx'} \phi(x').\end{aligned}

Compare this to [2], where this last is taken as the definition of the momentum operator, and the relationship to the delta function is not spelled out explicitly. This canonical commutator approach, while more abstract, seems to have less black magic involved in the setup. We do require the commutator relationship [X,P] = i\hbar to be pulled out of a magic hat, but at least the magic show is a structured one based on a small set of core assumptions.

It will likely be good to come back to this later when trying to reconcile this new (for me) Dirac notation with the more basic notation I’m already comfortable with. When trying to compare the two, it will be good to note that there is a matrix element that is implied in the more old fashioned treatment in a book such as [3].

There is one fundamental assumption that appears to be made in this section that isn’t justified by anything except the end result. That is the assumption that P is a derivative-like operator, acting with a product rule action. That’s used to obtain (2.28) and is a fairly black magic operation. This same assumption is also hiding, somewhat sneakily, in the manipulation for (2.44).

If one has to make the assumption that P is a derivative-like operator, I don’t feel this method of introducing it is any less arbitrary. It is still pulled out of a magic hat, only because the answer is known ahead of time. The approach of [3], where the derivative nature is presented as a consequence of transforming (via Fourier transforms) from the position to the momentum representation, seems much more intuitive and less arbitrary.

Generalized momentum commutator.

It is stated that

\begin{aligned}[P,X^n] = - n i \hbar X^{n-1}.\end{aligned}

Let’s prove this by induction. The n=1 case is the canonical commutator itself, which is assumed (is there any good way to justify that from first principles, as presented in the text?). We have to prove the result for n, given the relationship for n-1. Expanding the nth power commutator we have

\begin{aligned}[P,X^n] &= P X^n - X^n P \\ &= P X^{n-1} X - X^{n } P \\ \end{aligned}

Rearranging the n-1 result we have

\begin{aligned}P X^{n-1} = X^{n-1} P - (n-1) i \hbar X^{n-2},\end{aligned}

and can insert that in our [P,X^n] expansion for

\begin{aligned}[P,X^n] &= \left( X^{n-1} P - (n-1) i \hbar X^{n-2} \right)X - X^{n } P \\ &= X^{n-1} (PX) - (n-1) i \hbar X^{n-1} - X^{n } P \\ &= X^{n-1} ( X P - i\hbar) - (n-1) i \hbar X^{n-1} - X^{n } P \\ &= -X^{n-1} i\hbar - (n-1) i \hbar X^{n-1} \\ &= -n i \hbar X^{n-1} \qquad\square\end{aligned}
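Since it is easy to drop a sign in this sort of induction, here is a quick symbolic spot check in the position representation, where P acts as -i \hbar d/dx and X as multiplication by x. This is only a verification sketch (assuming sympy is available), not part of the derivation:

import sympy as sp

x, hbar = sp.symbols('x hbar', positive=True)
f = sp.Function('f')(x)

def P(psi):
    # momentum operator in the position representation
    return -sp.I * hbar * sp.diff(psi, x)

for n in range(1, 6):
    commutator = P(x**n * f) - x**n * P(f)      # [P, X^n] acting on a test function
    assert sp.simplify(commutator + n * sp.I * hbar * x**(n - 1) * f) == 0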

Uncertainty principle.

The origin of the statement [\Delta A, \Delta B] = [A, B] is not something that seemed obvious. Expanding this out however is straightforward, and clarifies things. That is

\begin{aligned}[\Delta A, \Delta B] &= (A - \left\langle{{A}}\right\rangle) (B - \left\langle{{B}}\right\rangle) - (B - \left\langle{{B}}\right\rangle) (A - \left\langle{{A}}\right\rangle) \\ &= \left( A B - \left\langle{{A}}\right\rangle B - \left\langle{{B}}\right\rangle A +\left\langle{{A}}\right\rangle \left\langle{{B}}\right\rangle \right)-\left( B A - \left\langle{{B}}\right\rangle A - \left\langle{{A}}\right\rangle B +\left\langle{{B}}\right\rangle \left\langle{{A}}\right\rangle \right) \\ &= A B - B A \\ &= [A, B]\qquad\square\end{aligned}
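The same expansion can be checked mechanically with noncommutative symbols, treating the expectation values as ordinary commuting numbers a and b (again just a sympy sketch):

import sympy as sp

A, B = sp.symbols('A B', commutative=False)   # the operators
a, b = sp.symbols('a b')                      # <A>, <B>: ordinary numbers

lhs = sp.expand((A - a)*(B - b) - (B - b)*(A - a))   # [Delta A, Delta B]
assert lhs == sp.expand(A*B - B*A)                   # equals [A, B]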

Size of a particle

I found it curious that using \Delta x \Delta p \approx \hbar instead of \Delta x \Delta p \ge \hbar/2, was sufficient to obtain the hydrogen ground state energy E_{\text{min}} = -e^2/2 a_0, without also having to do any factor of two fudging.

Space displacement operator.

Initial notes.

I’d be curious to know if others also find the loose use of equality for approximation after approximation slightly disturbing.

I also find it curious that (2.140) is written

\begin{aligned}D(x) = \exp\left( -i \frac{P}{\hbar} x \right),\end{aligned}

and not

\begin{aligned}D(x) = \exp\left( -i x \frac{P}{\hbar} \right).\end{aligned}

Is this intentional? It doesn’t seem like P ought to be acting on x in this case, so why order the terms that way?

Expanding the application of this operator, or at least its first order Taylor series, is helpful to get an idea about this. Doing so with the original \Delta x' value used in the derivation of the text, we have, to start,

\begin{aligned}D(\Delta x') {\lvert {\phi} \rangle} &\approx \left(1 - i \frac{P}{\hbar} \Delta x' \right) {\lvert {\phi} \rangle} \\ &= \left(1 - i \left( -i \hbar \delta(x -x') \frac{\partial}{\partial x} \right) \frac{1}{{\hbar}} \Delta x'\right) {\lvert {\phi} \rangle} \\ \end{aligned}

This shows that the \Delta x' factor can be commuted with the momentum operator, as it is not a function of x', so the question of P x vs. x P above appears to be a non-issue.

Regardless of that conclusion, it seems worthwhile to continue an attempt at expanding this shift operator action on the state vector. Let’s do so, but by computing the matrix element {\langle {x'} \rvert} D(\Delta x') {\lvert {\phi} \rangle}. That is

\begin{aligned}{\langle {x'} \rvert} D(\Delta x') {\lvert {\phi} \rangle} &\approx\left\langle{{x'}} \vert {{\phi}}\right\rangle - {\langle {x'} \rvert} \delta(x -x') \frac{\partial}{\partial x} \Delta x' {\lvert {\phi} \rangle} \\ &=\phi(x') - \int {\langle {x'} \rvert} \delta(x -x') \frac{\partial}{\partial x} \Delta x' {\lvert {x'} \rangle} \left\langle{{x'}} \vert {{\phi}}\right\rangle dx' \\ &=\phi(x') - \Delta x' \int \delta(x -x') \frac{\partial}{\partial x} \left\langle{{x'}} \vert {{\phi}}\right\rangle dx' \\ &=\phi(x') - \Delta x' \frac{\partial}{\partial x'} \left\langle{{x'}} \vert {{\phi}}\right\rangle \\ &=\phi(x') - \Delta x' \frac{\partial}{\partial x'} \phi(x') \\ \end{aligned}

This is consistent with the text. It is interesting, and initially surprising, that the space displacement operator when applied to a state vector introduces a negative shift in the wave function associated with that state vector. In the derivation of the text, this was associated with the use of integration by parts (i.e., due to the sign change in that integration). Here we see it sneak back in, due to the i^2 once the momentum operator is expanded completely.

As a last note and question: the first order Taylor approximation of the momentum operator was used. If the higher order terms are retained, as in

\begin{aligned}\exp\left( -i \Delta x' \frac{P}{\hbar} \right) = 1 - \Delta x' \delta(x -x') \frac{\partial}{\partial x} + \frac{1}{{2}} \left( - \Delta x' \delta(x -x') \frac{\partial}{\partial x} \right)^2 + \cdots,\end{aligned}

then how does one evaluate a squared delta function (or Nth power)?

Talked to Vatche about this after class. The key to this is sequential evaluation. Considering the simple case for P^2, we evaluate one operator at a time, and never actually square the delta function

\begin{aligned}{\langle {x'} \rvert} P^2 {\lvert {\phi} \rangle} \end{aligned}

I was also questioned why I was including the delta function at this point. Why would I do that? Thinking further on this, I see that it isn’t a reasonable thing to do. That delta function only comes into the mix when one takes the matrix element of the momentum operator, as in

\begin{aligned}{\langle {x'} \rvert} P {\lvert {x} \rangle} = -i \hbar \delta(x-x') \frac{d}{dx'}. \end{aligned}

This is very much like the fact that the delta function only shows up in the continuous representation in other contexts where one has matrix elements. The simplest example is just

\begin{aligned}\left\langle{{x'}} \vert {{x}}\right\rangle = \delta(x-x').\end{aligned}

I also see now that the momentum operator is directly identified with the derivative (no delta function) in two other places in the text. These are equations (2.32) and (2.46) respectively:

\begin{aligned}P(x) &= -i \hbar \frac{d}{dx} \\ P &= -i \hbar \frac{d}{dX}.\end{aligned}

In the first, (2.32), I thought the P(x) was somehow different, just a helpful expression found along the way, but now it occurs to me that this was intended to be an unambiguous representation of the momentum operator itself.

A second try.

Getting a feel for this Dirac notation takes a bit of adjustment. Let’s try evaluating the matrix element for the space displacement operator again, without abusing the notation, or thinking that we have a requirement for squared delta functions and other weirdness. We start with

\begin{aligned}D(\Delta x') {\lvert {\phi} \rangle}&=e^{-\frac{i P \Delta x'}{\hbar}} {\lvert {\phi} \rangle} \\ &=\int dx e^{-\frac{i P \Delta x'}{\hbar}} {\lvert {x} \rangle}\left\langle{{x}} \vert {{\phi}}\right\rangle \\ &=\int dx e^{-\frac{i P \Delta x'}{\hbar}} {\lvert {x} \rangle} \phi(x).\end{aligned}

Now, to evaluate e^{-\frac{i P \Delta x'}{\hbar}} {\lvert {x} \rangle}, we can expand in series

\begin{aligned}e^{-\frac{i P \Delta x'}{\hbar}} {\lvert {x} \rangle}&={\lvert {x} \rangle} + \sum_{k=1}^\infty \frac{1}{{k!}} \left( \frac{-i \Delta x'}{\hbar} \right)^k P^k {\lvert {x} \rangle}.\end{aligned}

It is tempting to left multiply by {\langle {x'} \rvert} and commute that past the P^k, then write P^k = \left( -i \hbar\, d/dx \right)^k. That probably produces the correct result, but is abusive of the notation. We can still left multiply by {\langle {x'} \rvert}, but to be proper, I think we have to leave that on the left of the P^k operator. This yields

\begin{aligned}{\langle {x'} \rvert} D(\Delta x') {\lvert {\phi} \rangle}&=\int dx \left( \left\langle{{x'}} \vert {{x}}\right\rangle + \sum_{k=1}^\infty \frac{1}{{k!}} \left( \frac{-i \Delta x'}{\hbar} \right)^k {\langle {x'} \rvert} P^k {\lvert {x} \rangle}\right) \phi(x) \\ &=\int dx \delta(x'- x) \phi(x)+\sum_{k=1}^\infty \frac{1}{{k!}} \left( \frac{-i \Delta x'}{\hbar} \right)^k \int dx {\langle {x'} \rvert} P^k {\lvert {x} \rangle} \phi(x).\end{aligned}

The first integral is just \phi(x'), and we are left with integrating the higher power momentum matrix elements, applied to the wave function \phi(x). We can proceed iteratively to expand those integrals

\begin{aligned}\int dx {\langle {x'} \rvert} P^k {\lvert {x} \rangle} \phi(x)&= \iint dx dx'' {\langle {x'} \rvert} P^{k-1} {\lvert {x''} \rangle} {\langle {x''} \rvert} P {\lvert {x} \rangle} \phi(x) \\ \end{aligned}

Now we have a matrix element that we know what to do with. Namely, {\langle {x''} \rvert} P {\lvert {x} \rangle} = -i \hbar \delta(x''-x) {\partial {}}/{\partial {x}}, which yields

\begin{aligned}\int dx {\langle {x'} \rvert} P^k {\lvert {x} \rangle} \phi(x)&= -i \hbar \iint dx dx'' {\langle {x'} \rvert} P^{k-1} {\lvert {x''} \rangle} \delta(x''-x) \frac{\partial {}}{\partial {x}} \phi(x) \\ &= -i \hbar \int dx {\langle {x'} \rvert} P^{k-1} {\lvert {x} \rangle} \frac{\partial {\phi(x)}}{\partial {x}}.\end{aligned}

Each similar application of the identity operator brings down another -i\hbar and derivative yielding

\begin{aligned}\int dx {\langle {x'} \rvert} P^k {\lvert {x} \rangle} \phi(x)&= (-i \hbar)^k \frac{\partial^k \phi(x')}{\partial {x'}^k}.\end{aligned}

Going back to our displacement operator matrix element, we now have

\begin{aligned}{\langle {x'} \rvert} D(\Delta x') {\lvert {\phi} \rangle}&=\phi(x')+\sum_{k=1}^\infty \frac{1}{{k!}} \left( \frac{-i \Delta x'}{\hbar} \right)^k (-i \hbar)^k \frac{\partial^k \phi(x')}{\partial {x'}^k} \\ &=\phi(x') +\sum_{k=1}^\infty \frac{1}{{k!}} \left( - \Delta x' \frac{\partial }{\partial x'} \right)^k  \phi(x') \\ &= \phi(x' - \Delta x').\end{aligned}

This shows nicely why the sign goes negative. It is also no longer surprising once one observes that this can be obtained directly by using the adjoint relationship

\begin{aligned}{\langle {x'} \rvert} D(\Delta x') {\lvert {\phi} \rangle}&=(D^\dagger(\Delta x') {\lvert {x'} \rangle})^\dagger {\lvert {\phi} \rangle} \\ &=(D(-\Delta x') {\lvert {x'} \rangle})^\dagger {\lvert {\phi} \rangle} \\ &={\lvert {x' - \Delta x'} \rangle}^\dagger {\lvert {\phi} \rangle} \\ &=\left\langle{{x' - \Delta x'}} \vert {{\phi}}\right\rangle \\ &=\phi(x' - \Delta x')\end{aligned}

That’s a whole lot easier than the integral manipulation, but the longer exercise at least shows that we now have a feel for the notation, and it confirms the exponential formulation of the operator nicely.
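Before moving on, the sign of the shift is easy to double check numerically: on a polynomial the displacement series terminates, so the comparison with \phi(x' - \Delta x') is exact. A small sympy sketch (the polynomial is an arbitrary choice of mine):

import sympy as sp

xp, d = sp.symbols('xp d')        # xp stands for x', d for Delta x'
phi = xp**3 - 2*xp + 1            # any polynomial: the displacement series terminates

shifted = sum((-d)**k / sp.factorial(k) * sp.diff(phi, xp, k) for k in range(4))
assert sp.expand(shifted - phi.subs(xp, xp - d)) == 0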

Time evolution operator

The phrase “we identify time evolution with the Hamiltonian”. What a magic hat maneuver! Is there a way that this would be logical without already knowing the answer?

Dispersion delta function representation.

The principal part notation here I found a bit unclear. He writes

\begin{aligned}\lim_{\epsilon \rightarrow 0} \frac{(x'-x)}{(x'-x)^2 + \epsilon^2}= P\left( \frac{1}{{x' - x}} \right).\end{aligned}

In complex variables the principal part is the negative power series terms. For example, for f(z) = \sum a_k z^k, the principal part is

\begin{aligned}\sum_{k = -\infty}^{-1} a_k z^k\end{aligned}

This doesn’t vanish at z = 0 as the principal part in this section is stated to. In (2.202) he pulls the P out of the integral, but I think the intention is really to keep this associated with the 1/(x'-x), as in

\begin{aligned}\lim_{\epsilon \rightarrow 0} \frac{1}{{\pi}} \int_0^\infty dx' \frac{f(x')}{x'-x - i \epsilon}= \frac{1}{{\pi}} \int_0^\infty dx' f(x') P\left( \frac{1}{{x' - x}} \right) + i f(x)\end{aligned}

Will this even have any relevance in this text?

Problems.

1. Cauchy-Schwarz identity.

We wish to find the value of \lambda that is just right to come up with the desired identity. The starting point is the expansion of the inner product

\begin{aligned}\left\langle{{a + \lambda b}} \vert {{a + \lambda b}}\right\rangle&= \left\langle{{a}} \vert {{a}}\right\rangle + \lambda \lambda^{*} \left\langle{{b}} \vert {{b}}\right\rangle + \lambda \left\langle{{a}} \vert {{b}}\right\rangle + \lambda^{*} \left\langle{{b}} \vert {{a}}\right\rangle \\ \end{aligned}

There is a trial and error approach to this problem, where one magically picks \lambda \propto \left\langle{{b}} \vert {{a}}\right\rangle/\left\langle{{b}} \vert {{b}}\right\rangle^n, and figures out the proportionality constant and scale factor for the denominator to do the job. A nicer way is to set up the problem as an extreme value exercise. We can write this inner product as a function of \lambda, and proceed with setting the derivative equal to zero

\begin{aligned}f(\lambda) =\left\langle{{a}} \vert {{a}}\right\rangle + \lambda \lambda^{*} \left\langle{{b}} \vert {{b}}\right\rangle + \lambda \left\langle{{a}} \vert {{b}}\right\rangle + \lambda^{*} \left\langle{{b}} \vert {{a}}\right\rangle \\ \end{aligned}

Its derivative is

\begin{aligned}\frac{df}{d\lambda} &=\left(\lambda^{*} + \lambda \frac{d\lambda^{*}}{d\lambda}\right) \left\langle{{b}} \vert {{b}}\right\rangle + \left\langle{{a}} \vert {{b}}\right\rangle + \frac{d\lambda^{*}}{d\lambda} \left\langle{{b}} \vert {{a}}\right\rangle \\ &=\lambda^{*} \left\langle{{b}} \vert {{b}}\right\rangle + \left\langle{{a}} \vert {{b}}\right\rangle +\frac{d\lambda^{*}}{d\lambda} \Bigl( \lambda \left\langle{{b}} \vert {{b}}\right\rangle + \left\langle{{b}} \vert {{a}}\right\rangle \Bigr)\end{aligned}

Now, we have a bit of a problem with d\lambda^{*}/d\lambda, since that doesn’t actually exist. However, that problem can be sidestepped if we insist that the factor that multiplies it is zero. That provides a value for \lambda that also kills off the remainder of df/d\lambda. That value is

\begin{aligned}\lambda = - \frac{\left\langle{{b}} \vert {{a}}\right\rangle }{ \left\langle{{b}} \vert {{b}}\right\rangle  }.\end{aligned}

Back substitution yields

\begin{aligned}\left\langle{{a + \lambda b}} \vert {{a + \lambda b}}\right\rangle&= \left\langle{{a}} \vert {{a}}\right\rangle - \left\langle{{a}} \vert {{b}}\right\rangle\left\langle{{b}} \vert {{a}}\right\rangle/\left\langle{{b}} \vert {{b}}\right\rangle \ge 0.\end{aligned}

This is easily rearranged to obtain the desired result:

\begin{aligned}\left\langle{{a}} \vert {{a}}\right\rangle \left\langle{{b}} \vert {{b}}\right\rangle \ge \left\langle{{b}} \vert {{a}}\right\rangle\left\langle{{a}} \vert {{b}}\right\rangle.\end{aligned}
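A quick numerical spot check of both the extremal \lambda and the final inequality, with random complex vectors standing in for {\lvert {a} \rangle} and {\lvert {b} \rangle} (a sketch assuming numpy; np.vdot(u, v) plays the role of \left\langle{{u}} \vert {{v}}\right\rangle):

import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=5) + 1j * rng.normal(size=5)
b = rng.normal(size=5) + 1j * rng.normal(size=5)

lam = -np.vdot(b, a) / np.vdot(b, b)          # lambda = -<b|a>/<b|b>
v = a + lam * b
expected = np.vdot(a, a).real - abs(np.vdot(b, a))**2 / np.vdot(b, b).real
assert np.isclose(np.vdot(v, v).real, expected)
assert np.vdot(a, a).real * np.vdot(b, b).real >= abs(np.vdot(b, a))**2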

2. Uncertainty relation.

The problem.

Using the Schwarz inequality of problem 1, and a split of the operator product into symmetric and antisymmetric (anticommutator and commutator) parts, show that

\begin{aligned}{\left\lvert{\Delta A \Delta B}\right\rvert}^2 \ge  \frac{1}{{4}}{\left\lvert{ \left[{A},{B}\right]}\right\rvert}^2,\end{aligned} \hspace{\stretch{1}}(3.1)

and that this result implies

\begin{aligned}\Delta x \Delta p \ge \frac{\hbar}{2}.\end{aligned} \hspace{\stretch{1}}(3.2)

The solution.

This problem seems somewhat misleading, since the Schwarz inequality appears to have nothing to do with showing 3.1; only the split of the operator product into symmetric and antisymmetric parts is needed there. Another possible tricky thing about this problem is that there is no mention of the anticommutator in the text at this point that I can find, so if one does not know what it is defined as, it must be figured out by context.

I’ve also had an interpretation problem with this, since \Delta x \Delta p in 3.2 cannot mean the operators, as is the case in 3.1. My assumption is that in 3.2 these deltas are really absolute expectation values, and that we really want to show

\begin{aligned}{\left\lvert{\left\langle{{\Delta X}}\right\rangle}\right\rvert} {\left\lvert{\left\langle{{\Delta P}}\right\rangle}\right\rvert} \ge \frac{\hbar}{2}.\end{aligned} \hspace{\stretch{1}}(3.3)

However, I’m unable to demonstrate this. Instead I’m able to show two things:

\begin{aligned}\left\langle{{(\Delta X)^2 }}\right\rangle \left\langle{{(\Delta P)^2 }}\right\rangle&\ge \frac{\hbar^2}{4} \\ {\left\lvert{\left\langle{{\Delta X \Delta P }}\right\rangle }\right\rvert}&\ge\frac{\hbar}{2}\end{aligned}

Is one of these the result to be shown? Note that only the first of these required the Schwarz inequality. Also, it seems strange that we want the expectation of the operator \Delta X\Delta P?

Starting with the first part of the problem, note that we can factor any operator product into a linear combination of two Hermitian operators using the commutator and anticommutator. That is

\begin{aligned}C D &= \frac{1}{{2}}\left( C D + D C\right) + \frac{1}{{2}}\left( C D - D C\right) \\ &= \frac{1}{{2}}\left( C D + D C\right) + \frac{1}{{2i}}\left( C D - D C\right) i \\ &\equiv \frac{1}{{2}}\left\{{C},{D}\right\}+\frac{1}{{2i}} \left[{C},{D}\right] i\end{aligned}

For Hermitian operators C, and D, using (CD)^\dagger = D^\dagger C^\dagger = D C, we can show that the two operator factors are Hermitian,

\begin{aligned}\left(\frac{1}{{2}}\left\{{C},{D}\right\}\right)^\dagger&= \frac{1}{{2}}\left( C D + D C\right)^\dagger \\ &= \frac{1}{{2}}\left( D^\dagger C^\dagger + C^\dagger D^\dagger\right) \\ &= \frac{1}{{2}}\left( D C + C D \right) \\ &= \frac{1}{{2}}\left\{{C},{D}\right\},\end{aligned}

\begin{aligned}\left(\frac{1}{{2}}\left[{C},{D}\right] i\right)^\dagger&= -\frac{i}{2} \left( C D - D C\right)^\dagger \\ &= -\frac{i}{2}\left( D^\dagger C^\dagger - C^\dagger D^\dagger\right) \\ &= -\frac{i}{2}\left( D C - C D \right) \\ &=\frac{1}{{2}}\left[{C},{D}\right] i\end{aligned}

So for the absolute squared value of the expectation of product of two operators we have

\begin{aligned}{\left\lvert{\left\langle{{C D }}\right\rangle}\right\rvert}^2&={\left\lvert{\left\langle{{\frac{1}{{2}}\left\{{C},{D}\right\} +\frac{1}{{2i}} \left[{C},{D}\right] i}}\right\rangle}\right\rvert}^2 \\ &={\left\lvert{ \frac{1}{{2}}\left\langle{{\left\{{C},{D}\right\}}}\right\rangle +\frac{1}{{2i}} \left\langle{{\left[{C},{D}\right] i}}\right\rangle }\right\rvert}^2.\end{aligned}

Now, these expectation values are real, given the fact that these operators are Hermitian. Suppose we write a = \left\langle{{\left\{{C},{D}\right\}}}\right\rangle/2, and b = \left\langle{{\left[{C},{D}\right]i}}\right\rangle/2, then we have

\begin{aligned}{\left\lvert{ \frac{1}{{2}}\left\langle{{\left\{{C},{D}\right\}}}\right\rangle +\frac{1}{{2i}} \left\langle{{\left[{C},{D}\right] i}}\right\rangle }\right\rvert}^2&={\left\lvert{ a - b i }\right\rvert}^2 \\ &=( a - b i ) ( a + b i ) \\ &=a^2 + b^2\end{aligned}

So we have, for the absolute squared expectation value of the operator product C D,

\begin{aligned}{\left\lvert{\left\langle{{C D }}\right\rangle}\right\rvert}^2 &=\frac{1}{{4}}\left\langle{{\left\{{C},{D}\right\}}}\right\rangle^2 +\frac{1}{{4}} \left\langle{{\left[{C},{D}\right] i}}\right\rangle^2 \\ &=\frac{1}{{4}}{\left\lvert{\left\langle{{\left\{{C},{D}\right\}}}\right\rangle}\right\rvert}^2 +\frac{1}{{4}} {\left\lvert{\left\langle{{\left[{C},{D}\right] i}}\right\rangle}\right\rvert}^2 \\ &=\frac{1}{{4}}{\left\lvert{\left\langle{{\left\{{C},{D}\right\}}}\right\rangle}\right\rvert}^2 +\frac{1}{{4}} {\left\lvert{\left\langle{{\left[{C},{D}\right]}}\right\rangle}\right\rvert}^2 \\ &\ge\frac{1}{{4}} {\left\lvert{\left\langle{{\left[{C},{D}\right]}}\right\rangle}\right\rvert}^2.\end{aligned}

With C = \Delta A, and D = \Delta B, this almost completes the first part of the problem. The remaining thing to note is that \left[{\Delta A},{\Delta B}\right] = \left[{A},{B}\right]. This last is straightforward to show

\begin{aligned}\left[{\Delta A},{\Delta B}\right] &=\left[{A - \left\langle{{A}}\right\rangle},{B - \left\langle{{B}}\right\rangle}\right]  \\ &=(A - \left\langle{{A}}\right\rangle)(B - \left\langle{{B}}\right\rangle)-(B - \left\langle{{B}}\right\rangle)(A - \left\langle{{A}}\right\rangle) \\ &=\left( A B - \left\langle{{A}}\right\rangle B - \left\langle{{B}}\right\rangle A + \left\langle{{A}}\right\rangle\left\langle{{B}}\right\rangle \right)-\left( B A - \left\langle{{B}}\right\rangle A - \left\langle{{A}}\right\rangle B + \left\langle{{B}}\right\rangle\left\langle{{A}}\right\rangle \right) \\ &=A B - B A  \\ &=\left[{A},{B}\right].\end{aligned}

Putting the pieces together we have

\begin{aligned}{\left\lvert{\left\langle{{\Delta A \Delta B }}\right\rangle}\right\rvert}^2 &\ge\frac{1}{{4}} {\left\lvert{\left\langle{{\left[{A},{B}\right]}}\right\rangle}\right\rvert}^2.\end{aligned} \hspace{\stretch{1}}(3.4)

With expectation value implied by the absolute squared, this reproduces relation 3.1 as desired.
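As an aside, the split identity itself is easy to spot check numerically with random Hermitian matrices and a random state (a sketch assuming numpy is available):

import numpy as np

rng = np.random.default_rng(1)

def random_hermitian(n):
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (M + M.conj().T) / 2

n = 4
C, D = random_hermitian(n), random_hermitian(n)
psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)

def ev(A):                      # <psi| A |psi>
    return np.vdot(psi, A @ psi)

lhs = abs(ev(C @ D))**2
anti = ev(C @ D + D @ C)        # <{C,D}>: real
comm = ev(C @ D - D @ C)        # <[C,D]>: purely imaginary
assert np.isclose(lhs, abs(anti)**2 / 4 + abs(comm)**2 / 4)
assert lhs >= abs(comm)**2 / 4 - 1e-12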

For the remaining part of the problem, with {\lvert {\alpha} \rangle} = \Delta A {\lvert {\psi} \rangle}, and {\lvert {\beta} \rangle} = \Delta B {\lvert {\psi} \rangle}, and noting that (\Delta A)^\dagger = \Delta A for Hermitian operator A (or B too in this case), the Schwarz inequality

\begin{aligned}\left\langle{{\alpha}} \vert {{\alpha}}\right\rangle\left\langle{{\beta}} \vert {{\beta}}\right\rangle &\ge {\left\lvert{\left\langle{{\beta}} \vert {{\alpha}}\right\rangle}\right\rvert}^2,\end{aligned} \hspace{\stretch{1}}(3.5)

takes the following form

\begin{aligned}{\langle {\psi} \rvert}(\Delta A)^\dagger \Delta A {\lvert {\psi} \rangle} {\langle {\psi} \rvert}(\Delta B)^\dagger \Delta B {\lvert {\psi} \rangle} &\ge {\left\lvert{{\langle {\psi} \rvert} (\Delta B)^\dagger \Delta A {\lvert {\psi} \rangle}}\right\rvert}^2.\end{aligned}

These are expectation values, and allow us to use 3.4 to show

\begin{aligned}\left\langle{{(\Delta A)^2 }}\right\rangle \left\langle{{(\Delta B)^2 }}\right\rangle&\ge {\left\lvert{ \left\langle{{\Delta B \Delta A }}\right\rangle }\right\rvert}^2 \\ &= \frac{1}{{4}} {\left\lvert{\left\langle{{\left[{B},{A}\right]}}\right\rangle}\right\rvert}^2.\end{aligned}

For A = X, and B = P, this is

\begin{aligned}\left\langle{{(\Delta X)^2 }}\right\rangle \left\langle{{(\Delta P)^2 }}\right\rangle&\ge \frac{\hbar^2}{4}\end{aligned} \hspace{\stretch{1}}(3.6)

Hmm. This doesn’t look like it is quite the result that I expected? We have \left\langle{{(\Delta X)^2 }}\right\rangle \left\langle{{(\Delta P)^2 }}\right\rangle instead of \left\langle{{\Delta X }}\right\rangle^2 \left\langle{{\Delta P}}\right\rangle^2?

Let’s step back slightly. Without introducing the Schwarz inequality, the result 3.4 of the commutator manipulation, together with \left[{X},{P}\right] = i \hbar, gives us

\begin{aligned}{\left\lvert{\left\langle{{\Delta X \Delta P }}\right\rangle}\right\rvert}^2 &\ge\frac{\hbar^2}{4} ,\end{aligned}

and taking roots we have

\begin{aligned}{\left\lvert{\left\langle{{\Delta X \Delta P }}\right\rangle }\right\rvert}&\ge\frac{\hbar}{2}.\end{aligned} \hspace{\stretch{1}}(3.7)

Is this really what we were intended to show?

Attempting to answer this myself, I refer to [2], where I find he uses a loose notation for this too, and writes in his equation 3.36

\begin{aligned}(\Delta C)^2 = \left\langle{{ (C - \left\langle{{C}}\right\rangle)^2 }}\right\rangle = \left\langle{{C^2}}\right\rangle - \left\langle{{C}}\right\rangle^2\end{aligned}

This usage seems consistent with that, so I think it is a reasonable assumption that the uncertainty relation \Delta x \Delta p \ge \hbar/2 is really shorthand notation for the more cumbersome relation involving roots of the expectations of the mean-square deviation operators

\begin{aligned}\sqrt{\left\langle{{ (X - \left\langle{{X}}\right\rangle)^2 }}\right\rangle}\sqrt{\left\langle{{ (P - \left\langle{{P}}\right\rangle)^2 }}\right\rangle} \ge \frac{\hbar}{2}.\end{aligned} \hspace{\stretch{1}}(3.8)

This is in fact what was proved arriving at 3.6.

Ah ha! Found it. Referring to equation 2.93 in the text, I see that a lowercase notation \Delta x = \sqrt{(\Delta X)^2} was introduced. This explains what seemed like ambiguous notation … it was just tricky notation, perfectly well explained, but done in passing in the text in a somewhat easy-to-miss way.
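As a final sanity check on this reading of 3.8, the minimum-uncertainty Gaussian saturates the bound exactly. A small sympy sketch (the normalized Gaussian test state is my own choice, not something from the text):

import sympy as sp

x = sp.Symbol('x', real=True)
sigma, hbar = sp.symbols('sigma hbar', positive=True)

psi = (1/(sp.pi*sigma**2))**sp.Rational(1, 4) * sp.exp(-x**2/(2*sigma**2))
var_x = sp.integrate(x**2 * psi**2, (x, -sp.oo, sp.oo))            # <X> = 0 for this state
var_p = hbar**2 * sp.integrate(sp.diff(psi, x)**2, (x, -sp.oo, sp.oo))  # <P^2>, with <P> = 0
print(sp.simplify(var_x * var_p))   # hbar**2/4, so the product of the roots is hbar/2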

3.

This problem is done by inspection.

4.

TODO.

5. Hermitian radial differential operator.

Show that the operator

\begin{aligned}R = -i \hbar \frac{\partial {}}{\partial {r}},\end{aligned}

is not Hermitian, and find the constant a so that

\begin{aligned}T = -i \hbar \left( \frac{\partial {}}{\partial {r}} + \frac{a}{r} \right),\end{aligned}

is Hermitian.

For the first part of the problem we can show that

\begin{aligned}\left( {\langle {\hat{\boldsymbol{\psi}}} \rvert} R {\lvert {\hat{\boldsymbol{\phi}}} \rangle} \right)^{*} \ne {\langle {\hat{\boldsymbol{\phi}}} \rvert} R {\lvert {\hat{\boldsymbol{\psi}}} \rangle}.\end{aligned}

For the RHS we have

\begin{aligned}{\langle {\hat{\boldsymbol{\phi}}} \rvert} R {\lvert {\hat{\boldsymbol{\psi}}} \rangle} = -i \hbar \iiint dr d\theta d\phi r^2 \sin\theta \hat{\boldsymbol{\phi}}^{*} \frac{\partial {\hat{\boldsymbol{\psi}}}}{\partial {r}}\end{aligned}

and for the LHS we have

\begin{aligned}\left( {\langle {\hat{\boldsymbol{\psi}}} \rvert} R {\lvert {\hat{\boldsymbol{\phi}}} \rangle} \right)^{*}&= i \hbar \iiint dr d\theta d\phi r^2 \sin\theta \hat{\boldsymbol{\psi}} \frac{\partial {\hat{\boldsymbol{\phi}}^{*}}}{\partial {r}} \\ &= -i \hbar \iiint dr d\theta d\phi \sin\theta \left( 2 r \hat{\boldsymbol{\psi}} + r^2 \frac{\partial {\hat{\boldsymbol{\psi}}}}{\partial {r}} \right)\hat{\boldsymbol{\phi}}^{*} \\ \end{aligned}

So, unless r\hat{\boldsymbol{\psi}} = 0, the operator R is not Hermitian.

Moving on to finding the constant a such that T is Hermitian we calculate

\begin{aligned}\left( {\langle {\hat{\boldsymbol{\psi}}} \rvert} T {\lvert {\hat{\boldsymbol{\phi}}} \rangle} \right)^{*}&= i \hbar \iiint dr d\theta d\phi r^2 \sin\theta \hat{\boldsymbol{\psi}} \left( \frac{\partial {}}{\partial {r}} + \frac{a}{r} \right) \hat{\boldsymbol{\phi}}^{*} \\ &= i \hbar \iiint dr d\theta d\phi \sin\theta \hat{\boldsymbol{\psi}} \left( r^2 \frac{\partial {}}{\partial {r}} + a r \right) \hat{\boldsymbol{\phi}}^{*} \\ &= -i \hbar \iiint dr d\theta d\phi \sin\theta \left( r^2 \frac{\partial {\hat{\boldsymbol{\psi}}}}{\partial {r}} + 2 r \hat{\boldsymbol{\psi}} - a r \hat{\boldsymbol{\psi}} \right) \hat{\boldsymbol{\phi}}^{*} \\ \end{aligned}

and

\begin{aligned}{\langle {\hat{\boldsymbol{\phi}}} \rvert} T {\lvert {\hat{\boldsymbol{\psi}}} \rangle} = -i \hbar \iiint dr d\theta d\phi \sin\theta \hat{\boldsymbol{\phi}}^{*} \left( r^2 \frac{\partial {\hat{\boldsymbol{\psi}}}}{\partial {r}} + a r \hat{\boldsymbol{\psi}} \right)\end{aligned}

So, for T to be Hermitian, we require

\begin{aligned}2 r - a r = a r.\end{aligned}

So a = 1, and our Hermitian operator is

\begin{aligned}T = -i \hbar \left( \frac{\partial {}}{\partial {r}} + \frac{1}{r} \right).\end{aligned}
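As a check, here is that Hermiticity condition verified explicitly for a specific pair of well behaved radial test functions (my own choices, finite at the origin and decaying at infinity), along with a confirmation that the same test fails for R. A sympy sketch:

import sympy as sp

r, hbar = sp.symbols('r hbar', positive=True)
psi = sp.exp(-r)
phi = r * sp.exp(-r)

def R(f):
    return -sp.I * hbar * sp.diff(f, r)

def T(f):
    return -sp.I * hbar * (sp.diff(f, r) + f/r)

def bracket(f, op, g):          # <f| op g> with the r^2 radial measure
    return sp.integrate(r**2 * sp.conjugate(f) * op(g), (r, 0, sp.oo))

assert sp.simplify(bracket(psi, T, phi) - sp.conjugate(bracket(phi, T, psi))) == 0
assert sp.simplify(bracket(psi, R, phi) - sp.conjugate(bracket(phi, R, psi))) != 0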

6. Radial directional derivative operator.

Problem.

Show that

\begin{aligned}D = \mathbf{p} \cdot \hat{\mathbf{r}} + \hat{\mathbf{r}} \cdot \mathbf{p},\end{aligned}

is Hermitian. Expand this operator in spherical coordinates. Compare result to problem 5.

Solution.

Tackling the spherical coordinates expression of the operator D, we have

\begin{aligned}\frac{1}{{-i\hbar}} D \Psi &= \left( \boldsymbol{\nabla} \cdot \hat{\mathbf{r}} + \hat{\mathbf{r}} \cdot \boldsymbol{\nabla} \right) \Psi \\ &= \left( \boldsymbol{\nabla} \cdot \hat{\mathbf{r}} \right) \Psi + \left( \boldsymbol{\nabla} \Psi \right) \cdot \hat{\mathbf{r}} + \hat{\mathbf{r}} \cdot \left(\boldsymbol{\nabla} \Psi\right) \\ &=\left( \boldsymbol{\nabla} \cdot \hat{\mathbf{r}} \right) \Psi + 2 \hat{\mathbf{r}} \cdot \left( \boldsymbol{\nabla} \Psi \right).\end{aligned}

Here braces have been used to denote the extent of the operation of the gradient. In spherical polar coordinates, our gradient is

\begin{aligned}\boldsymbol{\nabla} \equiv \hat{\mathbf{r}} \frac{\partial {}}{\partial {r}}+\hat{\boldsymbol{\theta}} \frac{1}{{r}} \frac{\partial {}}{\partial {\theta}}+\hat{\boldsymbol{\phi}} \frac{1}{{r \sin\theta}} \frac{\partial {}}{\partial {\phi}}.\end{aligned}

This gets us most of the way there, and we have

\begin{aligned}\frac{1}{{-i\hbar}} D \Psi &=2 \frac{\partial {\Psi}}{\partial {r}} + \left( \hat{\mathbf{r}} \cdot \frac{\partial {\hat{\mathbf{r}}}}{\partial {r}}+\frac{1}{{r}} \hat{\boldsymbol{\theta}} \cdot \frac{\partial {\hat{\mathbf{r}}}}{\partial {\theta}}+\frac{1}{{r \sin\theta}} \hat{\boldsymbol{\phi}} \cdot \frac{\partial {\hat{\mathbf{r}}}}{\partial {\phi}}\right) \Psi.\end{aligned}

Since {\partial {\hat{\mathbf{r}}}}/{\partial {r}} = 0, we are left with evaluating \hat{\boldsymbol{\theta}} \cdot {\partial {\hat{\mathbf{r}}}}/{\partial {\theta}}, and \hat{\boldsymbol{\phi}} \cdot {\partial {\hat{\mathbf{r}}}}/{\partial {\phi}}. To do so I chose to employ the (Geometric Algebra) exponential form of the spherical unit vectors [4]

\begin{aligned}I &= \mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_3 \\ \hat{\boldsymbol{\phi}} &= \mathbf{e}_{2} \exp( I \mathbf{e}_3 \phi ) \\ \hat{\mathbf{r}} &= \mathbf{e}_3 \exp( I \hat{\boldsymbol{\phi}} \theta ) \\ \hat{\boldsymbol{\theta}} &= \mathbf{e}_1 \mathbf{e}_2 \hat{\boldsymbol{\phi}} \exp( I \hat{\boldsymbol{\phi}} \theta ).\end{aligned}

The partials of interest are then

\begin{aligned}\frac{\partial {\hat{\mathbf{r}}}}{\partial {\theta}} &= \mathbf{e}_3 I \hat{\boldsymbol{\phi}} \exp( I \hat{\boldsymbol{\phi}} \theta ) = \hat{\boldsymbol{\theta}},\end{aligned}

and

\begin{aligned}\frac{\partial {\hat{\mathbf{r}}}}{\partial {\phi}} &= \frac{\partial {}}{\partial {\phi}} \mathbf{e}_3 \left( \cos\theta + I \hat{\boldsymbol{\phi}} \sin\theta \right) \\ &= \mathbf{e}_1 \mathbf{e}_2 \sin\theta \frac{\partial {\hat{\boldsymbol{\phi}}}}{\partial {\phi}} \\ &= \mathbf{e}_1 \mathbf{e}_2 \sin\theta \mathbf{e}_2 \mathbf{e}_1 \mathbf{e}_2 \exp( I \mathbf{e}_3 \phi ) \\ &= \sin\theta \hat{\boldsymbol{\phi}}.\end{aligned}

Only after computing these did I find exactly these results for the partials of interest on MathWorld’s Spherical Coordinates page, which confirms these calculations. Note that a different angle convention is used there, so one has to exchange \phi and \theta and the corresponding unit vector labels.

Substitution back into our expression for the operator we have

\begin{aligned}D &= - 2 i \hbar \left( \frac{\partial {}}{\partial {r}} + \frac{1}{{r}} \right),\end{aligned}

an operator that is exactly twice the operator of problem 5, already shown to be Hermitian. Since the constant numerical scaling of a Hermitian operator leaves it Hermitian, this shows that D is Hermitian as expected.

\hat{\boldsymbol{\theta}} directional momentum operator

Let’s try this for the other unit vector directions too. We also want

\begin{aligned}\left( \boldsymbol{\nabla} \cdot \hat{\boldsymbol{\theta}} + \hat{\boldsymbol{\theta}} \cdot \boldsymbol{\nabla} \right) \Psi&=2 \hat{\boldsymbol{\theta}} \cdot (\boldsymbol{\nabla} \Psi) + \left( \boldsymbol{\nabla} \cdot \hat{\boldsymbol{\theta}} \right) \Psi.\end{aligned}

The work consists of evaluating

\begin{aligned}\boldsymbol{\nabla} \cdot \hat{\boldsymbol{\theta}} &= \hat{\mathbf{r}} \cdot \frac{\partial {\hat{\boldsymbol{\theta}}}}{\partial {r}}+ \frac{1}{{r}} \hat{\boldsymbol{\theta}} \cdot \frac{\partial {\hat{\boldsymbol{\theta}}}}{\partial {\theta}}+ \frac{1}{{r \sin\theta}} \hat{\boldsymbol{\phi}} \cdot \frac{\partial {\hat{\boldsymbol{\theta}}}}{\partial {\phi}}.\end{aligned}

This time we need the {\partial {\hat{\boldsymbol{\theta}}}}/{\partial {\theta}}, {\partial {\hat{\boldsymbol{\theta}}}}/{\partial {\phi}} partials, which are

\begin{aligned}\frac{\partial {\hat{\boldsymbol{\theta}}}}{\partial {\theta}} &=\mathbf{e}_1 \mathbf{e}_2 \hat{\boldsymbol{\phi}} I \hat{\boldsymbol{\phi}} \exp( I \hat{\boldsymbol{\phi}} \theta) \\ &=-\mathbf{e}_3 \exp( I \hat{\boldsymbol{\phi}} \theta) \\ &=- \hat{\mathbf{r}}.\end{aligned}

This has no \hat{\boldsymbol{\theta}} component, so does not contribute to \boldsymbol{\nabla} \cdot \hat{\boldsymbol{\theta}}. Noting that

\begin{aligned}\frac{\partial {\hat{\boldsymbol{\phi}}}}{\partial {\phi}} &= -\mathbf{e}_1 \exp( I \mathbf{e}_3 \phi ) = \mathbf{e}_2 \mathbf{e}_1 \hat{\boldsymbol{\phi}},\end{aligned}

the \phi partial is

\begin{aligned}\frac{\partial {\hat{\boldsymbol{\theta}}}}{\partial {\phi}} &=\mathbf{e}_1 \mathbf{e}_2 \left( \frac{\partial {\hat{\boldsymbol{\phi}}}}{\partial {\phi}} \exp( I \hat{\boldsymbol{\phi}} \theta )+\hat{\boldsymbol{\phi}} I \sin\theta \frac{\partial {\hat{\boldsymbol{\phi}}}}{\partial {\phi}} \right) \\ &=\hat{\boldsymbol{\phi}} \left( \exp( I \hat{\boldsymbol{\phi}} \theta )+I \sin\theta \mathbf{e}_2 \mathbf{e}_1 \hat{\boldsymbol{\phi}}\right),\end{aligned}

with \hat{\boldsymbol{\phi}} component

\begin{aligned}\hat{\boldsymbol{\phi}} \cdot \frac{\partial {\hat{\boldsymbol{\theta}}}}{\partial {\phi}} &=\left\langle{{\exp( I \hat{\boldsymbol{\phi}} \theta )+I \sin\theta \mathbf{e}_2 \mathbf{e}_1 \hat{\boldsymbol{\phi}} }}\right\rangle \\ &=\cos\theta + \mathbf{e}_3 \cdot \hat{\boldsymbol{\phi}} \sin\theta \\ &=\cos\theta.\end{aligned}

Assembling the results, and labeling this operator \Theta we have

\begin{aligned}\Theta &\equiv \frac{1}{{2}} \left( \mathbf{p} \cdot \hat{\boldsymbol{\theta}} + \hat{\boldsymbol{\theta}} \cdot \mathbf{p} \right)  \\ &=-i \hbar \frac{1}{{r}} \left( \frac{\partial {}}{\partial {\theta}} + \frac{1}{{2}} \cot\theta \right).\end{aligned}

It would be reasonable to expect this operator to also be Hermitian, and checking this explicitly by comparing
{\langle {\Phi} \rvert} \Theta {\lvert {\Psi} \rangle}^{*} and {\langle {\Psi} \rvert} \Theta {\lvert {\Phi} \rangle}, shows that this is in fact the case.
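For the record, here is that check spelled out with a specific pair of \theta test functions (my own choices), using the \sin\theta part of the volume element. A sympy sketch:

import sympy as sp

theta, r, hbar = sp.symbols('theta r hbar', positive=True)
psi = sp.cos(theta)
phi = sp.sin(theta)

def Theta(f):
    return -sp.I * hbar / r * (sp.diff(f, theta) + sp.cot(theta) * f / 2)

def bracket(f, g):              # <f| Theta g> with the sin(theta) measure
    return sp.integrate(sp.simplify(sp.sin(theta) * sp.conjugate(f) * Theta(g)), (theta, 0, sp.pi))

assert sp.simplify(bracket(psi, phi) - sp.conjugate(bracket(phi, psi))) == 0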

\hat{\boldsymbol{\phi}} directional momentum operator

Finally, let’s do the same for the \hat{\boldsymbol{\phi}} direction. We want

\begin{aligned}\left( \boldsymbol{\nabla} \cdot \hat{\boldsymbol{\phi}} + \hat{\boldsymbol{\phi}} \cdot \boldsymbol{\nabla} \right) \Psi&=2 \hat{\boldsymbol{\phi}} \cdot (\boldsymbol{\nabla} \Psi) + \left( \boldsymbol{\nabla} \cdot \hat{\boldsymbol{\phi}} \right) \Psi.\end{aligned}

The work consists of evaluating

\begin{aligned}\boldsymbol{\nabla} \cdot \hat{\boldsymbol{\phi}} &= \hat{\mathbf{r}} \cdot \frac{\partial {\hat{\boldsymbol{\phi}}}}{\partial {r}}+ \frac{1}{{r}} \hat{\boldsymbol{\theta}} \cdot \frac{\partial {\hat{\boldsymbol{\phi}}}}{\partial {\theta}}+ \frac{1}{{r \sin\theta}} \hat{\boldsymbol{\phi}} \cdot \frac{\partial {\hat{\boldsymbol{\phi}}}}{\partial {\phi}}.\end{aligned}

This time we need the {\partial {\hat{\boldsymbol{\phi}}}}/{\partial {\theta}}, {\partial {\hat{\boldsymbol{\phi}}}}/{\partial {\phi}} = \mathbf{e}_2 \mathbf{e}_1 \hat{\boldsymbol{\phi}} partials. The \theta partial is

\begin{aligned}\frac{\partial {\hat{\boldsymbol{\phi}}}}{\partial {\theta}} &=\frac{\partial {}}{\partial {\theta}} \mathbf{e}_2 \exp( I \mathbf{e}_3 \phi ) \\ &= 0.\end{aligned}

Since {\partial {\hat{\boldsymbol{\phi}}}}/{\partial {\phi}} = \mathbf{e}_2 \mathbf{e}_1 \hat{\boldsymbol{\phi}} is just \hat{\boldsymbol{\phi}} rotated by 90 degrees in the x-y plane, it is perpendicular to \hat{\boldsymbol{\phi}} and that term contributes nothing either. We conclude that \boldsymbol{\nabla} \cdot \hat{\boldsymbol{\phi}} = 0, and expect that we have one more Hermitian operator

\begin{aligned}\Phi &\equiv \frac{1}{{2}} \left( \mathbf{p} \cdot \hat{\boldsymbol{\phi}} + \hat{\boldsymbol{\phi}} \cdot \mathbf{p} \right)  \\ &=-i \hbar \frac{1}{{r \sin\theta}} \frac{\partial {}}{\partial {\phi}}.\end{aligned}

It is simple to confirm that this is Hermitian since the integration by parts does not involve any of the volume element. In fact, any operator -i\hbar f(r,\theta) {\partial {}}/{\partial {\phi}} would also be Hermitian, including the simplest case -i\hbar {\partial {}}/{\partial {\phi}}. I’ll have to dig out my Bohm text again, since I seem to recall that one being used in the spherical harmonics chapter.

A note on the Hermitian test and Dirac notation.

I’ve been a bit loose with my notation. I’ve stated that my demonstrations of the Hermitian nature have been done by showing

\begin{aligned}{\langle {\phi} \rvert} A {\lvert {\psi} \rangle}^{*} - {\langle {\psi} \rvert} A {\lvert {\phi} \rangle} = 0.\end{aligned}

However, what I’ve actually done is show that

\begin{aligned}\left( \int d^3 \mathbf{x} \phi^{*} (\mathbf{x}) A(\mathbf{x}) \psi(\mathbf{x}) \right)^{*} - \int d^3 \mathbf{x} \psi^{*} (\mathbf{x}) A(\mathbf{x}) \phi(\mathbf{x}) = 0.\end{aligned}

To justify this note that

\begin{aligned}{\langle {\phi} \rvert} A {\lvert {\psi} \rangle}^{*} &=\left( \iint d^3 \mathbf{r} d^3 \mathbf{s} \left\langle{{\phi}} \vert {\mathbf{r}}\right\rangle {\langle {\mathbf{r}} \rvert} A {\lvert {\mathbf{s}} \rangle} \left\langle{\mathbf{s}} \vert {{\psi}}\right\rangle \right)^{*} \\ &=\iint d^3 \mathbf{r} d^3 \mathbf{s} \phi(\mathbf{r}) \delta^3(\mathbf{r} - \mathbf{s}) A^{*}(\mathbf{s}) \psi^{*}(\mathbf{s}) \\ &=\int d^3 \mathbf{r} \phi(\mathbf{r}) A^{*}(\mathbf{r}) \psi^{*}(\mathbf{r}),\end{aligned}

and

\begin{aligned}{\langle {\psi} \rvert} A {\lvert {\phi} \rangle} &=\iint d^3 \mathbf{r} d^3 \mathbf{s} \left\langle{{\psi}} \vert {\mathbf{r}}\right\rangle {\langle {\mathbf{r}} \rvert} A {\lvert {\mathbf{s}} \rangle} \left\langle{\mathbf{s}} \vert {{\phi}}\right\rangle \\ &=\iint d^3 \mathbf{r} d^3 \mathbf{s} \psi^{*}(\mathbf{r}) \delta^3(\mathbf{r} - \mathbf{s}) A(\mathbf{s}) \phi(\mathbf{s}) \\ &=\int d^3 \mathbf{r} \psi^{*}(\mathbf{r}) A(\mathbf{r}) \phi(\mathbf{r}).\end{aligned}

Working backwards one sees that the comparison of the wave function integrals in explicit inner product notation is sufficient to demonstrate the Hermitian property.

7. Some commutators.

7. Problem.

For D in problem 6, obtain

\begin{itemize}
\item i) [D, x_i]
\item ii) [D, p_i]
\item iii) [D, L_i], where L_i = \mathbf{e}_i \cdot (\mathbf{r} \times \mathbf{p}).
\item iv) Show that e^{i\alpha D/\hbar} x_i e^{-i\alpha D/\hbar} = e^\alpha x_i
\end{itemize}

7. Expansion of \left[{D},{x_i}\right].

While expressing the operator as D = -2 i \hbar \left( \partial_r + 1/r \right) has less complexity than D = \mathbf{p} \cdot \hat{\mathbf{r}} + \hat{\mathbf{r}} \cdot \mathbf{p}, since no operation on \hat{\mathbf{r}} is required, this doesn’t look particularly convenient for use with Cartesian coordinates. Slightly better perhaps is

\begin{aligned}D = -2 i\hbar \frac{1}{{r}}( \mathbf{r} \cdot \boldsymbol{\nabla} + 1)\end{aligned}

\begin{aligned}[D, x_i] \Psi&=D x_i \Psi - x_i D \Psi \\ &=-2 i \hbar \frac{1}{{r}} \left( \mathbf{r} \cdot \boldsymbol{\nabla} + 1 \right) x_i \Psi+2 i \hbar x_i \frac{1}{{r}} \left( \mathbf{r} \cdot \boldsymbol{\nabla} + 1 \right) \Psi \\ &=-2 i \hbar \frac{1}{{r}} \mathbf{r} \cdot \boldsymbol{\nabla} x_i \Psi+2 i \hbar x_i \frac{1}{{r}} \mathbf{r} \cdot \boldsymbol{\nabla} \Psi \\ &=-2 i \hbar \frac{1}{{r}} \mathbf{r} \cdot (\boldsymbol{\nabla} x_i) \Psi-2 i \hbar x_i \frac{1}{{r}} \mathbf{r} \cdot \boldsymbol{\nabla} \Psi+2 i \hbar x_i \frac{1}{{r}} \mathbf{r} \cdot \boldsymbol{\nabla} \Psi \\ &=-2 i \hbar \frac{1}{{r}} \mathbf{r} \cdot \mathbf{e}_i \Psi.\end{aligned}

So this first commutator is:

\begin{aligned}[D, x_i] = -2 i \hbar \frac{x_i}{r}.\end{aligned}
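As a cross check, the same commutator can be ground out mechanically in Cartesian coordinates, letting D = -2 i \hbar (1/r)(\mathbf{r} \cdot \boldsymbol{\nabla} + 1) act on a generic wave function. A sympy sketch:

import sympy as sp

x1, x2, x3, hbar = sp.symbols('x1 x2 x3 hbar', real=True)
r = sp.sqrt(x1**2 + x2**2 + x3**2)
Psi = sp.Function('Psi')(x1, x2, x3)

def D(f):
    r_dot_grad = x1*sp.diff(f, x1) + x2*sp.diff(f, x2) + x3*sp.diff(f, x3)
    return -2*sp.I*hbar/r * (r_dot_grad + f)

comm = D(x1*Psi) - x1*D(Psi)                       # [D, x_1] acting on Psi
assert sp.simplify(comm + 2*sp.I*hbar*(x1/r)*Psi) == 0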

7. Alternate expansion of \left[{D},{x_i}\right].

Let’s try this instead completely in coordinate notation to verify. I’ll use implicit summation for repeated indexes, and write \partial_k = \partial/\partial x_k. A few intermediate results will be required

\begin{aligned}\partial_k \frac{1}{{r}} &= \partial_k (x_m x_m)^{-1/2}  \\ &= -\frac{1}{{2}} 2 x_k (x_m x_m)^{-3/2}  \\ \end{aligned}

Or

\begin{aligned}\partial_k \frac{1}{{r}} &= - \frac{x_k}{r^3}\end{aligned} \hspace{\stretch{1}}(3.9)

\begin{aligned}\partial_k \frac{x_i}{r}&=\frac{\delta_{ik}}{r} - \frac{ x_i x_k }{r^3}\end{aligned} \hspace{\stretch{1}}(3.10)

\begin{aligned}\partial_k \frac{x_k}{r}&=\frac{3}{r} - \frac{ x_k x_k }{r^3} = \frac{2}{r}\end{aligned} \hspace{\stretch{1}}(3.11)

The action of the momentum operators on the coordinates is

\begin{aligned}p_k x_i \Psi &=-i \hbar \partial_k x_i \Psi \\ &=-i \hbar \left( \delta_{ik} + x_i \partial_k \right) \Psi \\ &=-i \hbar \delta_{ik} + x_i p_k\end{aligned}

\begin{aligned}p_k x_k \Psi &=-i \hbar \partial_k x_k \Psi \\ &=-i \hbar \left( 3 + x_k \partial_k \right) \Psi\end{aligned}

Or

\begin{aligned}p_k x_i &= -i \hbar \delta_{ik} + x_i p_k \\ p_k x_k &= - 3 i \hbar + x_k p_k \end{aligned} \hspace{\stretch{1}}(3.12)

And finally

\begin{aligned}p_k \frac{1}{{r}} \Psi&=(p_k \frac{1}{{r}}) \Psi+ \frac{1}{{r}} p_k \Psi \\ &=-i \hbar \left( -\frac{x_k}{r^3}\right) \Psi+ \frac{1}{{r}} p_k \Psi \\ \end{aligned}

So

\begin{aligned}p_k \frac{1}{{r}} &= i \hbar \frac{x_k}{r^3} + \frac{1}{{r}}p_k\end{aligned} \hspace{\stretch{1}}(3.14)

We can use these to rewrite D

\begin{aligned}D &= p_k \frac{x_k}{r} + \frac{x_k}{r} p_k \\ &= p_k x_k \frac{1}{{r}} + \frac{x_k}{r} p_k \\ &= \left( - 3 i \hbar + x_k p_k \right)\frac{1}{{r}} + \frac{x_k}{r} p_k \\ &= - \frac{3 i \hbar}{r} + x_k \left( i \hbar \frac{x_k}{r^3} + \frac{1}{{r}}p_k \right) + \frac{x_k}{r} p_k \\ \end{aligned}

\begin{aligned}D &= \frac{2}{r} ( -i \hbar + x_k p_k )\end{aligned} \hspace{\stretch{1}}(3.15)

This leaves us in the position to compute the commutator

\begin{aligned}\left[{D},{x_i}\right]&= \frac{2}{r} ( -i \hbar + x_k p_k ) x_i- \frac{2 x_i}{r} ( -i \hbar + x_k p_k ) \\ &= \frac{2}{r} x_k ( -i \hbar \delta_{ik} + x_i p_k )- \frac{2 x_i}{r} x_k p_k \\ &= -\frac{2 i \hbar x_i}{r} \end{aligned}

So, unless I’m doing something fundamentally wrong, the same way in both methods, this appears to be the desired result. I question my answer since utilizing this for the later computation of e^{i\alpha D/\hbar} x_i e^{-i\alpha D/\hbar} did not yield the expected answer.

7. [D, p_i]

\begin{aligned}\left[{D},{p_i}\right] &=\frac{2}{r} ( -i \hbar + x_k p_k ) p_i- p_i \frac{2}{r} ( -i \hbar + x_k p_k )  \\ &=\frac{2}{r} ( -i \hbar + x_k p_k ) p_i- 2 \left( i \hbar \frac{x_i}{r^3} + \frac{1}{{r}} p_i \right) ( -i \hbar + x_k p_k ) \\ &=\frac{2}{r} \left( x_k p_k p_i - p_i x_k p_k \right)- \frac{i \hbar x_i}{r^2} \frac{2}{r} ( -i \hbar + x_k p_k ) \\ &=\frac{2}{r} \left( x_k p_k p_i - ( x_k p_i - i \hbar \delta_{ik} ) p_k \right)- \frac{i \hbar x_i}{r^2} D \\ &=\frac{2 i \hbar }{r} p_i - \frac{i \hbar x_i}{r^2} D \\ &=-\frac{i \hbar}{r} \left( \frac{x_i}{r} D - 2 p_i \right) \qquad\square\end{aligned}

If there is some significance to this expansion, other than to get a feel for operator manipulation, it escapes me.

7. [D, L_i]

To expand [D, L_i], it will be sufficient to consider any specific index i \in \{1,2,3\} and then utilize cyclic permutation of the indexes in the result to generalize. Let’s pick i=1, for which we have

\begin{aligned}L_1 = x_2 p_3 - x_3 p_2 \end{aligned}

It appears we will want to know

\begin{aligned}p_m D &=-2 i \hbar p_m \frac{1}{{r}} ( 1 + x_k p_k ) \\ &=-2 i \hbar \left(i \hbar \frac{x_m}{r^3} + \frac{1}{{r}}p_m\right)( 1 + x_k p_k ) \\ &=-2 i \hbar \left(i \hbar \frac{x_m}{r^3} + \frac{1}{{r}}p_m+i \hbar \frac{x_m x_k }{r^3} p_k + \frac{1}{{r}}p_m x_k p_k \right) \\ &=-\frac{2 i \hbar}{r} \left(i \hbar \frac{x_m}{r^2} + p_m+i \hbar \frac{x_m x_k }{r^2} p_k -i \hbar p_m + x_k p_m p_k \right)\end{aligned}

and we also want

\begin{aligned}D x_m &=- \frac{2 i \hbar }{r} ( 1 + x_k p_k ) x_m  \\ &=- \frac{2 i \hbar }{r} ( x_m + x_k ( -i \hbar \delta_{km} + x_m p_k ) ) \\ &=- \frac{2 i \hbar }{r} ( x_m - i \hbar x_m + x_m x_k p_k ) \\ \end{aligned}

This also happens to be D x_m = x_m D + \frac{2 (i \hbar)^2 x_m }{r}, but does that help at all?

Assembling these we have

\begin{aligned}\left[{D},{L_1}\right]&=D x_2 p_3 - D x_3 p_2 - x_2 p_3 D + x_3 p_2 D \\ &=- \frac{2 i \hbar }{r} ( x_2 - i \hbar x_2 + x_2 x_k p_k ) p_3+ \frac{2 i \hbar }{r} ( x_3 - i \hbar x_3 + x_3 x_k p_k ) p_2  \\ &+\frac{2 i \hbar x_2 }{r} \left(i \hbar \frac{x_3}{r^2} + p_3+i \hbar \frac{x_3 x_k }{r^2} p_k -i \hbar p_3 + x_k p_3 p_k \right) \\ &-\frac{2 i \hbar x_3 }{r} \left(i \hbar \frac{x_2}{r^2} + p_2+i \hbar \frac{x_2 x_k }{r^2} p_k -i \hbar p_2 + x_k p_2 p_k \right) \\ \end{aligned}

With a bit of brute force it is simple enough to verify that all these terms mystically cancel out, leaving us zero

\begin{aligned}\left[{D},{L_1}\right] = 0\end{aligned}

There surely must be an easier way to demonstrate this. Likely utilizing the commutator relationships derived earlier.
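Short of finding that easier argument, a symbolic check at least confirms that the cancellation is real. A sympy sketch, using the same Cartesian form of D as in the check above:

import sympy as sp

x1, x2, x3, hbar = sp.symbols('x1 x2 x3 hbar', real=True)
r = sp.sqrt(x1**2 + x2**2 + x3**2)
Psi = sp.Function('Psi')(x1, x2, x3)

def D(f):
    r_dot_grad = x1*sp.diff(f, x1) + x2*sp.diff(f, x2) + x3*sp.diff(f, x3)
    return -2*sp.I*hbar/r * (r_dot_grad + f)

def L1(f):                                         # L_1 = x_2 p_3 - x_3 p_2
    return -sp.I*hbar*(x2*sp.diff(f, x3) - x3*sp.diff(f, x2))

assert sp.simplify(D(L1(Psi)) - L1(D(Psi))) == 0   # [D, L_1] = 0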

7. e^{i\alpha D/\hbar} x_i e^{-i\alpha D/\hbar}

We will need to evaluate D^k x_i. We have the first power from our commutator relation

\begin{aligned}D x_i &= x_i \left( D - \frac{ 2 i \hbar }{r} \right)\end{aligned}

A successive application of this operator therefore yields

\begin{aligned}D^2 x_i &= D x_i \left( D - \frac{ 2 i \hbar }{r} \right) \\ &= x_i \left( D - \frac{ 2 i \hbar }{r} \right)^2 \\ \end{aligned}

So we have

\begin{aligned}D^k x_i &= x_i \left( D - \frac{ 2 i \hbar }{r} \right)^k \\ \end{aligned}

This now preps us to expand the first product in the desired exponential sandwich

\begin{aligned}e^{i\alpha D/\hbar} x_i&=x_i + \sum_{k=1}^\infty \frac{1}{{k!}} \left( \frac{i \alpha D}{\hbar} \right)^k x_i \\ &=x_i + \sum_{k=1}^\infty \frac{1}{{k!}} \left( \frac{i \alpha}{\hbar} \right)^k D^k x_i \\ &=x_i + \sum_{k=1}^\infty \frac{1}{{k!}} \left( \frac{i \alpha}{\hbar} \right)^k x_i \left( D - \frac{ 2 i \hbar }{r} \right)^k  \\ &= x_i e^{ \frac{i \alpha }{\hbar} \left( D - \frac{ 2 i \hbar }{r} \right) } \\ &= x_i e^{ 2 \alpha /r } e^{ i \alpha D /\hbar }.\end{aligned}

The exponential sandwich then produces

\begin{aligned}e^{i\alpha D/\hbar} x_i e^{-i\alpha D/\hbar}  &= e^{2 \alpha/r } x_i \end{aligned}

Note that this isn’t the value we are supposed to get. Either my value for D x_i is off by a factor of 2/r or the problem in the text contains a typo.

8. Reduction of some commutators using the fundamental commutator relation.

Using the fundamental commutation relation

\begin{aligned}\left[{p},{x}\right] = -i \hbar,\end{aligned}

which we can also write as

\begin{aligned}p x = x p -i \hbar,\end{aligned}

expand \left[{x},{p^2}\right], \left[{x^2},{p}\right], and \left[{x^2},{p^2}\right].

The first is

\begin{aligned}\left[{x},{p^2}\right] &= x p^2 - p^2 x \\ &= x p^2 - p (p x) \\ &= x p^2 - p (x p -i \hbar) \\ &= x p^2 - (x p -i \hbar) p + i \hbar p \\ &= 2 i \hbar p \\ \end{aligned}

The second is

\begin{aligned}\left[{x^2},{p}\right] &= x^2 p - p x^2 \\ &= x^2 p - (x p - i\hbar) x \\ &= x^2 p - x (x p - i\hbar) + i \hbar x \\ &= 2 i \hbar x \\ \end{aligned}

Note that it is helpful for the last reduction of this problem to observe that we can write this as

\begin{aligned}p x^2 &= x^2 p - 2 i \hbar x \\ \end{aligned}

Finally for this last we have

\begin{aligned}\left[{x^2},{p^2}\right] &= x^2 p^2 - p^2 x^2 \\ &= x^2 p^2 - p (x^2 p - 2 i \hbar x) \\ &= x^2 p^2 - (x^2 p - 2 i \hbar x) p + 2 i \hbar (x p - i \hbar) \\ &= 4 i \hbar x p - 2 (i \hbar)^2 \\ \end{aligned}

That’s about as reduced as this can be made, but it is not very tidy looking. From this point we can simplify it a bit by factoring

\begin{aligned}\left[{x^2},{p^2}\right] &= 4 i \hbar x p - 2 (i \hbar)^2 \\ &= 2 i \hbar ( 2 x p - i \hbar) \\ &= 2 i \hbar ( x p + p x ) \\ &= 2 i \hbar \left\{{x},{p}\right\} \end{aligned}
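All three of these reduce to simple derivative identities in the position representation, which makes for an easy verification. A sympy sketch:

import sympy as sp

x, hbar = sp.symbols('x hbar', positive=True)
f = sp.Function('f')(x)

def p(g):
    return -sp.I * hbar * sp.diff(g, x)

checks = [
    (x*p(p(f)) - p(p(x*f)),        2*sp.I*hbar*p(f)),               # [x, p^2]
    (x**2*p(f) - p(x**2*f),        2*sp.I*hbar*x*f),                # [x^2, p]
    (x**2*p(p(f)) - p(p(x**2*f)),  2*sp.I*hbar*(x*p(f) + p(x*f))),  # [x^2, p^2] = 2 i hbar {x, p}
]
for lhs, rhs in checks:
    assert sp.simplify(lhs - rhs) == 0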

9. Finite displacement operator.

9. Part I.

For

\begin{aligned}F(d) = e^{-i p d/\hbar},\end{aligned}

the first part of this problem is to show that

\begin{aligned}\left[{x},{F(d)}\right] = x F(d) - F(d) x = d F(d)\end{aligned}

We need to evaluate

\begin{aligned}e^{-i p d/\hbar} x = \sum_{k=0}^\infty \frac{1}{{k!}} \left( \frac{-i p d}{\hbar} \right)^k x.\end{aligned}

To do so requires a reduction of p^k x. For k=2 we have

\begin{aligned}p^2 x &= p ( x p - i\hbar ) \\ &= ( x p - i\hbar ) p - i \hbar p \\ &= x p^2 - 2 i\hbar p.\end{aligned}

For the cube we get p^3 x = x p^3 - 3 i\hbar p^2, supplying confirmation of an induction hypothesis p^k x = x p^k - k i\hbar p^{k-1}, which can be verified

\begin{aligned}p^{k+1} x &= p ( x p^k - k i \hbar p^{k-1}) \\ &= (x p - i\hbar) p^k - k i \hbar p^k \\ &= x p^{k+1} - (k+1) i \hbar p^k \qquad\square\end{aligned}

For our exponential we then have

\begin{aligned}e^{-i p d/\hbar} x &= x + \sum_{k=1}^\infty \frac{1}{{k!}} \left( \frac{-i d}{\hbar} \right)^k (x p^k - k i\hbar p^{k-1}) \\ &= x e^{-i p d /\hbar }+ \sum_{k=1}^\infty \frac{1}{{(k-1)!}} \left( \frac{-i p d}{\hbar} \right)^{k-1} (-i d/\hbar)(- i\hbar) \\ &= ( x - d ) e^{-i p d /\hbar }.\end{aligned}

Put back into our commutator we have

\begin{aligned}\left[{x},{e^{-i p d/\hbar}}\right] = d e^{-ip d/\hbar},\end{aligned}

completing the proof.
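The induction identity p^k x = x p^k - k i \hbar p^{k-1} that this rests on is also easy to confirm in the position representation, where p^k is just (-i \hbar\, d/dx)^k. A sympy sketch:

import sympy as sp

x, hbar = sp.symbols('x hbar', positive=True)
f = sp.Function('f')(x)

def pk(g, k):                       # p^k g in the position representation
    return (-sp.I*hbar)**k * sp.diff(g, x, k)

for k in range(1, 6):
    lhs = pk(x*f, k)
    rhs = x*pk(f, k) - k*sp.I*hbar*pk(f, k - 1)
    assert sp.simplify(lhs - rhs) == 0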

9. Part II.

For state {\lvert {\alpha} \rangle} with {\lvert {\alpha_d} \rangle} = F(d) {\lvert {\alpha} \rangle}, show that the expectation values satisfy

\begin{aligned}\left\langle{{X}}\right\rangle_d = \left\langle{{X}}\right\rangle + d\end{aligned}

\begin{aligned}\left\langle{{X}}\right\rangle_d &={\langle {\alpha_d} \rvert} X {\lvert {\alpha_d} \rangle} \\ &=\iint dx' dx'' \left\langle{{\alpha_d}} \vert {{x'}}\right\rangle {\langle {x'} \rvert} X {\lvert {x''} \rangle} \left\langle{{x''}} \vert {{\alpha_d}}\right\rangle \\ &=\iint dx' dx'' \alpha_d^{*}(x') \delta(x' -x'') x' \alpha_d(x'') \\ &=\int dx' \alpha_d^{*}(x') x' \alpha_d(x') \\ \end{aligned}

But

\begin{aligned}\alpha_d(x') &= \exp\left( -\frac{i d }{\hbar} (-i\hbar) \frac{\partial}{\partial x'} \right) \alpha(x') \\ &= e^{- d \frac{\partial}{\partial x'} } \alpha(x') \\ &= \alpha(x' - d),\end{aligned}

so our position expectation is

\begin{aligned}\left\langle{{X}}\right\rangle_d &=\int dx' \alpha^{*}(x' -d) x' \alpha(x'- d).\end{aligned}

A change of variables x = x' -d gives us

\begin{aligned}\left\langle{{X}}\right\rangle_d &=\int dx \alpha^{*}(x) (x + d) \alpha(x) \\ &= \left\langle{{X}}\right\rangle + d \int dx \alpha^{*}(x) \alpha(x) \\ &= \left\langle{{X}}\right\rangle + d \qquad\square\end{aligned}

10. Hamiltonian position commutator and double commutator

For

\begin{aligned}H = \frac{1}{{2m}} p^2 + V(x)\end{aligned}

calculate \left[{H},{x}\right], and \left[{\left[{H},{x}\right]},{x}\right].

These are

\begin{aligned}\left[{H},{x}\right]&=\frac{1}{{2m}} p^2 x + V(x) x -\frac{1}{{2m}} x p^2 - x V(x)  \\ &=\frac{1}{{2m}} p ( x p - i \hbar) -\frac{1}{{2m}} x p^2 \\ &=\frac{1}{{2m}} \left( ( x p - i \hbar) p -i \hbar p \right) -\frac{1}{{2m}} x p^2 \\ &=-\frac{i\hbar p}{m} \\  \end{aligned}

and

\begin{aligned}\left[{\left[{H},{x}\right]},{x}\right]&=-\frac{i\hbar }{m} \left[{p},{x}\right] \\ &=\frac{(-i\hbar)^2 }{m} \\ &=-\frac{\hbar^2 }{m} \\ \end{aligned}
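Both commutators are quick to confirm in the position representation with a generic potential. A sympy sketch:

import sympy as sp

x, hbar, m = sp.symbols('x hbar m', positive=True)
f = sp.Function('f')(x)
V = sp.Function('V')(x)

def H(g):
    return -hbar**2/(2*m) * sp.diff(g, x, 2) + V*g

def p(g):
    return -sp.I*hbar*sp.diff(g, x)

comm = H(x*f) - x*H(f)                                  # [H, x] acting on f
assert sp.simplify(comm + sp.I*hbar/m * p(f)) == 0      # equals -(i hbar/m) p f
double = H(x**2*f) - 2*x*H(x*f) + x**2*H(f)             # [[H, x], x] acting on f
assert sp.simplify(double + hbar**2/m * f) == 0         # equals -(hbar^2/m) f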

We also have to show that

\begin{aligned}\sum_k (E_k -E_n) {\left\lvert{ {\langle {k} \rvert} x {\lvert {n} \rangle} }\right\rvert}^2 = \frac{\hbar^2}{2m}\end{aligned}

Expanding the absolute value in terms of conjugates we have

\begin{aligned}\sum_k (E_k -E_n) {\left\lvert{ {\langle {k} \rvert} x {\lvert {n} \rangle} }\right\rvert}^2 &= \sum_k (E_k -E_n) {\langle {k} \rvert} x {\lvert {n} \rangle} {\langle {n} \rvert} x {\lvert {k} \rangle} \\ &= \sum_k {\langle {k} \rvert} x {\lvert {n} \rangle} {\langle {n} \rvert} x E_k {\lvert {k} \rangle} -{\langle {k} \rvert} x E_n {\lvert {n} \rangle} {\langle {n} \rvert} x {\lvert {k} \rangle} \\ &= \sum_k {\langle {n} \rvert} x H {\lvert {k} \rangle} {\langle {k} \rvert} x {\lvert {n} \rangle} - {\langle {n} \rvert} x {\lvert {k} \rangle} {\langle {k} \rvert} x H {\lvert {n} \rangle} \\ &= {\langle {n} \rvert} x H x {\lvert {n} \rangle} - {\langle {n} \rvert} x x H {\lvert {n} \rangle} \\ &= {\langle {n} \rvert} x \left[{H},{x}\right] {\lvert {n} \rangle}  \\ &= -\frac{i \hbar}{m} {\langle {n} \rvert} x p {\lvert {n} \rangle}  \\ \end{aligned}

It is not obvious where to go from here. Taking the clue from the problem that the result involves the double commutator, we have

\begin{aligned}- \frac{\hbar^2}{m}&={\langle {n} \rvert} \left[{\left[{H},{x}\right]},{x}\right] {\lvert {n} \rangle} \\ &={\langle {n} \rvert} H x^2 - 2 x H x + x^2 H {\lvert {n} \rangle} \\ &=2 E_n {\langle {n} \rvert} x^2 {\lvert { n} \rangle} - 2 {\langle {n} \rvert} x H x {\lvert {n} \rangle} \\ &=2 E_n {\langle {n} \rvert} x^2 {\lvert { n} \rangle} - 2 {\langle {n} \rvert} ( -\left[{H},{x}\right] + H x) x {\lvert {n} \rangle} \\ &=2 {\langle {n} \rvert} \left[{H},{x}\right] x {\lvert {n} \rangle} \\ &=-\frac{2 i \hbar}{m} {\langle {n} \rvert} p x {\lvert {n} \rangle} \\ &=-\frac{2 i \hbar}{m} {\langle {n} \rvert} x p - i \hbar {\lvert {n} \rangle} \\ &=-\frac{2 i \hbar}{m} {\langle {n} \rvert} x p {\lvert {n} \rangle} +\frac{2 (i \hbar)^2}{m} \end{aligned}

So, somewhat flukily by working backwards, with a last rearrangement, we now have

\begin{aligned}-\frac{i \hbar}{m} {\langle {n} \rvert} x p {\lvert {n} \rangle} &= \frac{\hbar^2}{m} -\frac{\hbar^2}{2 m} \\ &= \frac{\hbar^2}{2 m}\end{aligned}

Substitution above gives the desired result. This is extremely ugly, and doesn’t follow any sort of logical progression. Is there a good way to sequence this proof?

11. Another double commutator.

Attempt 1. Incomplete.

\begin{aligned}H = \frac{\mathbf{p}^2}{2m} + V(\mathbf{r})\end{aligned}

use \left[{\left[{H},{e^{i \mathbf{k} \cdot \mathbf{r}}}\right]},{e^{-i \mathbf{k} \cdot \mathbf{r}}}\right] to obtain

\begin{aligned}\sum_n (E_n - E_s) {\left\lvert{{\langle {n} \rvert} e^{i\mathbf{k} \cdot \mathbf{r}} {\lvert {s} \rangle}}\right\rvert}^2\end{aligned}

First evaluate the commutators. The first is

\begin{aligned}\left[{H},{ e^{i \mathbf{k} \cdot \mathbf{r}}}\right]&= \frac{1}{{2m}} \left[{\mathbf{p}^2},{e^{i\mathbf{k} \cdot \mathbf{r}}}\right]\end{aligned}

The Laplacian applied to this exponential is

\begin{aligned}\boldsymbol{\nabla}^2 e^{i \mathbf{k} \cdot \mathbf{r} } \Psi&=\partial_m \partial_m e^{i k_n x_n } \Psi \\ &=\partial_m (i k_m e^{i \mathbf{k}\cdot \mathbf{r}} \Psi + e^{i \mathbf{k} \cdot \mathbf{r} } \partial_m \Psi ) \\ &=- \mathbf{k}^2 e^{i \mathbf{k}\cdot \mathbf{r}} \Psi + i e^{i \mathbf{k} \cdot \mathbf{r} } \mathbf{k} \cdot \boldsymbol{\nabla} \Psi+ i e^{i \mathbf{k} \cdot \mathbf{r}} \mathbf{k} \cdot \boldsymbol{\nabla} \Psi+ e^{i \mathbf{k} \cdot \mathbf{r}} \boldsymbol{\nabla}^2 \Psi\end{aligned}

Factoring out the exponentials this is

\begin{aligned}\boldsymbol{\nabla}^2 e^{i \mathbf{k} \cdot \mathbf{r} } &=e^{i \mathbf{k}\cdot \mathbf{r}} \left(- \mathbf{k}^2 + 2 i \mathbf{k} \cdot \boldsymbol{\nabla} + \boldsymbol{\nabla}^2 \right),\end{aligned}

and in terms of \mathbf{p}, we have

\begin{aligned}\mathbf{p}^2 e^{i \mathbf{k}\cdot \mathbf{r}} &= e^{i \mathbf{k}\cdot \mathbf{r}} \left((\hbar\mathbf{k})^2 + 2 (\hbar \mathbf{k}) \cdot \mathbf{p} + \mathbf{p}^2 \right)=e^{i \mathbf{k}\cdot \mathbf{r}} (\hbar \mathbf{k} + \mathbf{p})^2\end{aligned}

So, finally, our first commutator is

\begin{aligned}\left[{H},{ e^{i \mathbf{k} \cdot \mathbf{r}}}\right]&= \frac{1}{{2m}}e^{i \mathbf{k}\cdot \mathbf{r}} \left((\hbar\mathbf{k})^2 + 2 (\hbar \mathbf{k}) \cdot \mathbf{p} \right)\end{aligned} \hspace{\stretch{1}}(3.16)

The double commutator is then

\begin{aligned}\left[{\left[{H},{e^{i \mathbf{k} \cdot \mathbf{r}}}\right]},{e^{-i \mathbf{k} \cdot \mathbf{r}}}\right]&=e^{i\mathbf{k} \cdot \mathbf{r}} \frac{\hbar \mathbf{k}}{m} \cdot \left( \mathbf{p} e^{-i \mathbf{k} \cdot \mathbf{r}} - e^{-i \mathbf{k} \cdot \mathbf{r}} \mathbf{p} \right)\end{aligned}

To simplify this we want

\begin{aligned}\mathbf{k} \cdot \boldsymbol{\nabla} e^{-i \mathbf{k} \cdot \mathbf{r}} \Psi &=k_n \partial_n e^{-i k_m x_m } \Psi \\ &=e^{-i \mathbf{k} \cdot \mathbf{r} }\left(k_n (-i k_n) \Psi + k_n \partial_n \Psi \right) \\ &=e^{-i \mathbf{k} \cdot \mathbf{r} } \left( -i \mathbf{k}^2 + \mathbf{k} \cdot \boldsymbol{\nabla} \right) \Psi\end{aligned}

The double commutator is then left with just

\begin{aligned}\left[{\left[{H},{e^{i \mathbf{k} \cdot \mathbf{r}}}\right]},{e^{-i \mathbf{k} \cdot \mathbf{r}}}\right]&=- \frac{1}{{m}} (\hbar \mathbf{k})^2 \end{aligned} \hspace{\stretch{1}}(3.17)

Now, returning to the energy expression

\begin{aligned}\sum_n (E_n - E_s) {\left\lvert{{\langle {n} \rvert} e^{i\mathbf{k} \cdot \mathbf{r}} {\lvert {s} \rangle}}\right\rvert}^2&=\sum_n (E_n - E_s) {\langle {s} \rvert} e^{-i\mathbf{k} \cdot \mathbf{r}} {\lvert {n} \rangle} {\langle {n} \rvert} e^{i\mathbf{k} \cdot \mathbf{r}} {\lvert {s} \rangle} \\ &=\sum_n {\langle {s} \rvert} e^{-i\mathbf{k} \cdot \mathbf{r}} H {\lvert {n} \rangle} {\langle {n} \rvert} e^{i\mathbf{k} \cdot \mathbf{r}} {\lvert {s} \rangle} -{\langle {s} \rvert} e^{-i\mathbf{k} \cdot \mathbf{r}} {\lvert {n} \rangle} {\langle {n} \rvert} e^{i\mathbf{k} \cdot \mathbf{r}} H {\lvert {s} \rangle} \\ &={\langle {s} \rvert} e^{-i\mathbf{k} \cdot \mathbf{r}} H e^{i\mathbf{k} \cdot \mathbf{r}} {\lvert {s} \rangle} -{\langle {s} \rvert} e^{-i\mathbf{k} \cdot \mathbf{r}} e^{i\mathbf{k} \cdot \mathbf{r}} H {\lvert {s} \rangle} \\ &={\langle {s} \rvert} e^{-i\mathbf{k} \cdot \mathbf{r}} \left[{H},{e^{i\mathbf{k} \cdot \mathbf{r}}}\right] {\lvert {s} \rangle} \\ &=\frac{1}{{2m}} {\langle {s} \rvert} e^{-i\mathbf{k} \cdot \mathbf{r}} e^{i \mathbf{k}\cdot \mathbf{r}} \left((\hbar\mathbf{k})^2 + 2 (\hbar \mathbf{k}) \cdot \mathbf{p} \right){\lvert {s} \rangle} \\ &=\frac{1}{{2m}} {\langle {s} \rvert} (\hbar\mathbf{k})^2 + 2 (\hbar \mathbf{k}) \cdot \mathbf{p} {\lvert {s} \rangle} \\ &=\frac{(\hbar\mathbf{k})^2}{2m} + \frac{1}{{m}} {\langle {s} \rvert} (\hbar \mathbf{k}) \cdot \mathbf{p} {\lvert {s} \rangle} \\ \end{aligned}

I can’t figure out what to do with the \hbar \mathbf{k} \cdot \mathbf{p} expectation, and keep going around in circles.

I figure there is some trick related to the double commutator, so expanding the expectation of that seems appropriate

\begin{aligned}-\frac{1}{{m}} (\hbar \mathbf{k})^2 &={\langle {s} \rvert} \left[{\left[{H},{e^{i \mathbf{k} \cdot \mathbf{r}}}\right]},{e^{-i \mathbf{k} \cdot \mathbf{r}}}\right] {\lvert {s} \rangle} \\ &={\langle {s} \rvert} \left[{H},{e^{i \mathbf{k} \cdot \mathbf{r}}}\right] e^{-i \mathbf{k} \cdot \mathbf{r}}-e^{-i \mathbf{k} \cdot \mathbf{r}}\left[{H},{e^{i \mathbf{k} \cdot \mathbf{r}}}\right]  {\lvert {s} \rangle} \\ &=\frac{1}{{2m }} {\langle {s} \rvert} e^{ i \mathbf{k} \cdot \mathbf{r}} ( (\hbar \mathbf{k})^2 + 2 \hbar \mathbf{k} \cdot \mathbf{p}) e^{-i \mathbf{k} \cdot \mathbf{r}}-e^{-i \mathbf{k} \cdot \mathbf{r}} e^{ i \mathbf{k} \cdot \mathbf{r}} ( (\hbar \mathbf{k})^2 + 2 \hbar \mathbf{k} \cdot \mathbf{p})  {\lvert {s} \rangle} \\ &=\frac{1}{{m}} {\langle {s} \rvert} e^{ i \mathbf{k} \cdot \mathbf{r}} (\hbar \mathbf{k} \cdot \mathbf{p}) e^{-i \mathbf{k} \cdot \mathbf{r}}-\hbar \mathbf{k} \cdot \mathbf{p} {\lvert {s} \rangle} \\ \end{aligned}

Attempt 2.

I was going in circles above. With the help of betel on physicsforums, I got pointed in the right direction. Here’s a rework of this problem from scratch, also benefiting from hindsight.

Our starting point is the same, the evaluation of the first commutator (in which only the kinetic term survives, since V(\mathbf{r}) commutes with the exponential)

\begin{aligned}\left[{H},{ e^{i \mathbf{k} \cdot \mathbf{r}}}\right]&= \frac{1}{{2m}} \left[{\mathbf{p}^2},{e^{i\mathbf{k} \cdot \mathbf{r}}}\right].\end{aligned} \hspace{\stretch{1}}(3.18)

To continue we need to know how the momentum operator acts on an exponential of this form

\begin{aligned}\mathbf{p} e^{\pm i \mathbf{k} \cdot \mathbf{r}} \Psi&=-i \hbar \mathbf{e}_m \partial_m e^{\pm i k_n x_n } \Psi \\ &=e^{\pm i \mathbf{k} \cdot \mathbf{r}} \left( -i \hbar (\pm i \mathbf{e}_m k_m ) \Psi -i \hbar \mathbf{e}_m \partial_m \Psi\right).\end{aligned}

This gives us the helpful relationship

\begin{aligned}\mathbf{p} e^{\pm i \mathbf{k} \cdot \mathbf{r}} = e^{\pm i \mathbf{k} \cdot \mathbf{r}} (\mathbf{p} \pm \hbar \mathbf{k}).\end{aligned} \hspace{\stretch{1}}(3.19)
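As an aside, 3.19 is easy to check symbolically. Here is a minimal one dimensional sympy sketch (the symbols and the placeholder wavefunction \psi are my own arbitrary choices for the check, not anything from the text)

import sympy as sp

x, k, hbar = sp.symbols('x k hbar', real=True)
psi = sp.Function('psi')(x)

def p(f):
    # momentum operator in the position representation: -i hbar d/dx
    return -sp.I * hbar * sp.diff(f, x)

# verify p e^{i k x} psi = e^{i k x} (p + hbar k) psi
lhs = p(sp.exp(sp.I * k * x) * psi)
rhs = sp.exp(sp.I * k * x) * (p(psi) + hbar * k * psi)
print(sp.simplify(lhs - rhs))  # prints 0

The same check with the sign of k flipped confirms the minus case of 3.19.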

Applying the momentum operator twice to the positive exponential found in the first commutator 3.18 gives us

\begin{aligned}\mathbf{p}^2 e^{i \mathbf{k} \cdot \mathbf{r}} = e^{i \mathbf{k} \cdot \mathbf{r}} (\hbar \mathbf{k} + \mathbf{p})^2 = e^{i \mathbf{k} \cdot \mathbf{r}} ((\hbar \mathbf{k})^2 + 2 \hbar \mathbf{k} \cdot \mathbf{p} + \mathbf{p}^2),\end{aligned} \hspace{\stretch{1}}(3.20)

with which we can evaluate this first commutator.

\begin{aligned}\left[{H},{ e^{i \mathbf{k} \cdot \mathbf{r}}}\right]&= \frac{1}{{2m}} e^{i \mathbf{k} \cdot \mathbf{r}} ((\hbar \mathbf{k})^2 + 2 \hbar \mathbf{k} \cdot \mathbf{p}).\end{aligned} \hspace{\stretch{1}}(3.21)

For the double commutator we have

\begin{aligned}2m \left[{\left[{H},{e^{i \mathbf{k} \cdot \mathbf{r}}}\right]},{e^{-i \mathbf{k} \cdot \mathbf{r}}}\right]&=e^{i \mathbf{k} \cdot \mathbf{r}} ((\hbar \mathbf{k})^2 + 2 \hbar \mathbf{k} \cdot \mathbf{p}) e^{-i \mathbf{k} \cdot \mathbf{r}}-((\hbar \mathbf{k})^2 + 2 \hbar \mathbf{k} \cdot \mathbf{p})  \\ &=e^{i \mathbf{k} \cdot \mathbf{r}} 2 (\hbar \mathbf{k} \cdot \mathbf{p}) e^{-i \mathbf{k} \cdot \mathbf{r}}-2 \hbar \mathbf{k} \cdot \mathbf{p} \\ &=2 \hbar \mathbf{k} \cdot (\mathbf{p} - \hbar \mathbf{k})-2 \hbar \mathbf{k} \cdot \mathbf{p},\end{aligned}

so for the double commutator we have just a scalar

\begin{aligned}\left[{\left[{H},{e^{i \mathbf{k} \cdot \mathbf{r}}}\right]},{e^{-i \mathbf{k} \cdot \mathbf{r}}}\right]&= -\frac{(\hbar \mathbf{k})^2}{m}.\end{aligned} \hspace{\stretch{1}}(3.22)
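The scalar 3.22 can also be verified in the position representation with a few lines of sympy. This is just a one dimensional sanity check of my own, with an arbitrary potential V(x) and placeholder wavefunction \psi(x)

import sympy as sp

x, k, hbar = sp.symbols('x k hbar', real=True)
m = sp.symbols('m', positive=True)
psi = sp.Function('psi')(x)
V = sp.Function('V')(x)

def H(f):
    # Hamiltonian in the position representation: p^2/2m + V(x)
    return -hbar**2 / (2 * m) * sp.diff(f, x, 2) + V * f

ep = sp.exp(sp.I * k * x)
em = sp.exp(-sp.I * k * x)

def comm_H_ep(f):
    # [H, e^{i k x}] applied to f
    return H(ep * f) - ep * H(f)

# [[H, e^{i k x}], e^{-i k x}] psi, expected to equal -(hbar k)^2/m psi
double_comm = comm_H_ep(em * psi) - em * comm_H_ep(psi)
print(sp.simplify(double_comm + (hbar * k)**2 / m * psi))  # prints 0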

Now consider the expectation of this double commutator, expanded with some unintuitive steps that have been motivated by working backwards

\begin{aligned}{\langle {s} \rvert} \left[{\left[{H},{e^{i \mathbf{k} \cdot \mathbf{r}}}\right]},{e^{-i \mathbf{k} \cdot \mathbf{r}}}\right] {\lvert {s} \rangle}&={\langle {s} \rvert} 2 H - e^{i \mathbf{k} \cdot \mathbf{r}} H e^{-i \mathbf{k} \cdot \mathbf{r}} - e^{-i \mathbf{k} \cdot \mathbf{r}} H e^{i \mathbf{k} \cdot \mathbf{r}} {\lvert {s} \rangle} \\ &={\langle {s} \rvert} 2 e^{-i \mathbf{k} \cdot \mathbf{r}} e^{i \mathbf{k} \cdot \mathbf{r}} H- 2 e^{-i \mathbf{k} \cdot \mathbf{r}} H e^{i \mathbf{k} \cdot \mathbf{r}} {\lvert {s} \rangle} \\ &=2 \sum_n {\langle {s} \rvert} e^{-i \mathbf{k} \cdot \mathbf{r}} {\lvert {n} \rangle} {\langle {n} \rvert} e^{i \mathbf{k} \cdot \mathbf{r}} H {\lvert {s} \rangle}- {\langle {s} \rvert} e^{-i \mathbf{k} \cdot \mathbf{r}} H {\lvert {n} \rangle} {\langle {n} \rvert} e^{i \mathbf{k} \cdot \mathbf{r}} {\lvert {s} \rangle} \\ &=2 \sum_n E_s {\langle {s} \rvert} e^{-i \mathbf{k} \cdot \mathbf{r}} {\lvert {n} \rangle} {\langle {n} \rvert} e^{i \mathbf{k} \cdot \mathbf{r}} {\lvert {s} \rangle}- E_n {\langle {s} \rvert} e^{-i \mathbf{k} \cdot \mathbf{r}} {\lvert {n} \rangle} {\langle {n} \rvert} e^{i \mathbf{k} \cdot \mathbf{r}} {\lvert {s} \rangle} \\ &=2 \sum_n (E_s - E_n){\left\lvert{{\langle {n} \rvert} e^{i \mathbf{k} \cdot \mathbf{r}} {\lvert {s} \rangle}}\right\rvert}^2\end{aligned}

Equating this with 3.22, we have completed the problem

\begin{aligned}\sum_n (E_n - E_s) {\left\lvert{{\langle {n} \rvert} e^{i \mathbf{k} \cdot \mathbf{r}} {\lvert {s} \rangle}}\right\rvert}^2 &= \frac{(\hbar \mathbf{k})^2}{2m}.\end{aligned} \hspace{\stretch{1}}(3.23)
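As one last sanity check (my own observation, not part of the problem), in the small \mathbf{k} limit we can expand e^{i \mathbf{k} \cdot \mathbf{r}} \approx 1 + i \mathbf{k} \cdot \mathbf{r}. The identity part contributes nothing to the sum, since for n = s it is killed by the energy difference and for n \ne s by orthogonality, leaving

\begin{aligned}\sum_n (E_n - E_s) {\left\lvert{{\langle {n} \rvert} e^{i \mathbf{k} \cdot \mathbf{r}} {\lvert {s} \rangle}}\right\rvert}^2 \approx \mathbf{k}^2 \sum_n (E_n - E_s) {\left\lvert{{\langle {n} \rvert} \hat{\mathbf{k}} \cdot \mathbf{r} {\lvert {s} \rangle}}\right\rvert}^2 = \mathbf{k}^2 \frac{\hbar^2}{2m},\end{aligned}

which is exactly 3.23 to leading order. The last equality is the sum rule of the previous problem applied to the component of \mathbf{r} along \hat{\mathbf{k}}, so the two results are consistent.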

There is one subtlety above that is worth explicit mention before moving on. In particular, I did not find it obvious that the following is true

\begin{aligned}{\langle {s} \rvert} e^{i \mathbf{k} \cdot \mathbf{r}} H e^{-i \mathbf{k} \cdot \mathbf{r}} + e^{-i \mathbf{k} \cdot \mathbf{r}} H e^{i \mathbf{k} \cdot \mathbf{r}} {\lvert {s} \rangle} &={\langle {s} \rvert} 2 e^{-i \mathbf{k} \cdot \mathbf{r}} H e^{i \mathbf{k} \cdot \mathbf{r}} {\lvert {s} \rangle}.\end{aligned} \hspace{\stretch{1}}(3.24)

However, observe that both of these exponential sandwich operators e^{i \mathbf{k} \cdot \mathbf{r}} H e^{-i \mathbf{k} \cdot \mathbf{r}} and e^{-i \mathbf{k} \cdot \mathbf{r}} H e^{i \mathbf{k} \cdot \mathbf{r}} are Hermitian, since we have for example

\begin{aligned}(e^{i \mathbf{k} \cdot \mathbf{r}} H e^{-i \mathbf{k} \cdot \mathbf{r}})^\dagger&= (e^{-i \mathbf{k} \cdot \mathbf{r}})^\dagger H^\dagger (e^{i \mathbf{k} \cdot \mathbf{r}})^\dagger \\ &= e^{i \mathbf{k} \cdot \mathbf{r}} H e^{-i \mathbf{k} \cdot \mathbf{r}}\end{aligned}

Also observe that these two operators are complex conjugates of each other (in the position representation, where H is real), and with \mathbf{k} \cdot \mathbf{r} = \alpha for short, they can be written

\begin{aligned}e^{i \alpha} H e^{-i \alpha}&= \cos\alpha H \cos \alpha + \sin\alpha H \sin\alpha+ i\sin\alpha H \cos \alpha -i \cos\alpha H \sin\alpha \\ e^{-i \alpha} H e^{i \alpha} &= \cos\alpha H \cos \alpha + \sin\alpha H \sin\alpha- i\sin\alpha H \cos \alpha +i \cos\alpha H \sin\alpha\end{aligned} \hspace{\stretch{1}}(3.25)

Because H is real valued, and the expectation value of a Hermitian operator is real, none of the imaginary terms can contribute to these expectation values. Since the real parts of the two sandwich operators are identical, in the sum of 3.24 we can therefore pick either of the exponential sandwich terms and double it, as desired.
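An equivalent way to see 3.24, rephrased in my own terms using the commutator results already computed, is to note that by 3.21 and its \mathbf{k} \rightarrow -\mathbf{k} counterpart (via 3.19), the two sandwich operators are

\begin{aligned}e^{-i \mathbf{k} \cdot \mathbf{r}} H e^{i \mathbf{k} \cdot \mathbf{r}} &= H + \frac{1}{2m}\left( (\hbar \mathbf{k})^2 + 2 \hbar \mathbf{k} \cdot \mathbf{p} \right) \\ e^{i \mathbf{k} \cdot \mathbf{r}} H e^{-i \mathbf{k} \cdot \mathbf{r}} &= H + \frac{1}{2m}\left( (\hbar \mathbf{k})^2 - 2 \hbar \mathbf{k} \cdot \mathbf{p} \right),\end{aligned}

so they differ only by a term proportional to \mathbf{k} \cdot \mathbf{p}, and 3.24 reduces to the statement that {\langle {s} \rvert} \mathbf{p} {\lvert {s} \rangle} has no component along \mathbf{k}. That expectation vanishes for any (normalizable) energy eigenstate, since

\begin{aligned}0 = (E_s - E_s) {\langle {s} \rvert} \mathbf{r} {\lvert {s} \rangle} = {\langle {s} \rvert} \left[{H},{\mathbf{r}}\right] {\lvert {s} \rangle} = -\frac{i \hbar}{m} {\langle {s} \rvert} \mathbf{p} {\lvert {s} \rangle}.\end{aligned}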



