Peeter Joot's (OLD) Blog.

Math, physics, perl, and programming obscurity.


PHY456H1F: Quantum Mechanics II. Lecture 12 (Taught by Mr. Federico Duque Gomez). WKB Method

Posted by peeterjoot on October 21, 2011

[Click here for a PDF of this post with nicer formatting and figures if the post had any (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]


Peeter’s lecture notes from class. May not be entirely coherent.

WKB (Wentzel-Kramers-Brillouin) Method.

This is covered in section 24 in the text [1]. Also section 8 of [2].

We start with the 1D time independent Schrödinger equation

\begin{aligned}-\frac{\hbar^2}{2m} \frac{d^2 U}{dx^2} + V(x) U(x) = E U(x)\end{aligned} \hspace{\stretch{1}}(2.1)

which we can write as

\begin{aligned}\frac{d^2 U}{dx^2} + \frac{2m}{\hbar^2} (E - V(x)) U(x) = 0\end{aligned} \hspace{\stretch{1}}(2.2)

Consider a finite well potential as in figure (\ref{fig:qmTwoL13:qmTwoL12fig1})

\caption{Finite well potential}


Defining

\begin{aligned}k &= \frac{\sqrt{2m (E - V)}}{\hbar},\qquad E > V \\ \kappa &= \frac{\sqrt{2m (V - E)}}{\hbar}, \qquad V > E,\end{aligned} \hspace{\stretch{1}}(2.3)

we have for a bound state within the well

\begin{aligned}U \propto e^{\pm i k x}\end{aligned} \hspace{\stretch{1}}(2.5)

and for that state outside the well

\begin{aligned}U \propto e^{\pm \kappa x}\end{aligned} \hspace{\stretch{1}}(2.6)

In general we can hope for something similar. Let’s look for that something, but allow the constants k and \kappa to be functions of position

\begin{aligned}k^2(x) &= \frac{2m (E - V(x))}{\hbar^2},\qquad E > V \\ \kappa^2(x) &= \frac{2m (V(x) - E)}{\hbar^2}, \qquad V > E.\end{aligned} \hspace{\stretch{1}}(2.7)

In terms of k Schrödinger’s equation is just

\begin{aligned}\frac{d^2 U(x)}{dx^2} + k^2(x) U(x) = 0.\end{aligned} \hspace{\stretch{1}}(2.9)

We use the trial solution

\begin{aligned}U(x) = A e^{i \phi(x)},\end{aligned} \hspace{\stretch{1}}(2.10)

allowing \phi(x) to be complex

\begin{aligned}\phi(x) = \phi_R(x) + i \phi_I(x).\end{aligned} \hspace{\stretch{1}}(2.11)

We need second derivatives

\begin{aligned}(e^{i \phi})'' &=(i \phi' e^{i \phi})'  \\ &=(i \phi')^2 e^{i \phi} + i \phi'' e^{i \phi},\end{aligned}

and plug back into our Schrödinger equation to obtain

\begin{aligned}- (\phi'(x))^2 + i \phi''(x) + k^2(x) = 0.\end{aligned} \hspace{\stretch{1}}(2.12)

For the first round of approximation we assume

\begin{aligned}\phi''(x) \approx 0,\end{aligned} \hspace{\stretch{1}}(2.13)

and obtain

\begin{aligned}(\phi'(x))^2 = k^2(x),\end{aligned} \hspace{\stretch{1}}(2.14)


\begin{aligned}\phi'(x) = \pm k(x).\end{aligned} \hspace{\stretch{1}}(2.15)

For a second round of approximation we use 2.15, obtaining

\begin{aligned}\phi''(x) = \pm k'(x)\end{aligned} \hspace{\stretch{1}}(2.16)

Plugging back into 2.12 we have

\begin{aligned}-(\phi'(x))^2 \pm i k'(x) + k^2(x) = 0,\end{aligned} \hspace{\stretch{1}}(2.17)


\begin{aligned}\begin{aligned}\phi'(x) &= \pm \sqrt{ \pm i k'(x) + k^2(x) } \\ &= \pm k(x) \sqrt{ 1 \pm i \frac{k'(x)}{k^2(x)} } .\end{aligned}\end{aligned} \hspace{\stretch{1}}(2.18)

If k' is small compared to k^2

\begin{aligned}\frac{k'(x)}{k^2(x)} \ll 1, \end{aligned} \hspace{\stretch{1}}(2.19)

we have

\begin{aligned}\phi'(x) = \pm k(x) \pm i \frac{k'(x)}{2 k(x)}  \end{aligned} \hspace{\stretch{1}}(2.20)


\begin{aligned}\phi(x) &= \pm \int dx k(x) \pm i \int dx \frac{k'(x)}{2 k(x)}  + \text{const} \\ &= \pm \int dx k(x) \pm i \frac{1}{{2}} \ln k(x) + \text{const} \end{aligned}

Going back to our wavefunction, if E > V(x) we have

\begin{aligned}U(x) &\sim A e^{i \phi(x)} \\ &= \exp \left(i\left( \pm \int dx k(x) \pm i \frac{1}{{2}} \ln k(x) + \text{const} \right)\right) \\ &\sim \exp \left(i\left( \pm \int dx k(x) \pm i \frac{1}{{2}} \ln k(x) \right)\right) \\ &= e^{\pm i \int dx k(x)} e^{\mp \frac{1}{{2}} \ln k(x)} \\ \end{aligned}


\begin{aligned}U(x) \propto \frac{1}{{\sqrt{k(x)}}} e^{\pm i \int dx k(x)} \end{aligned} \hspace{\stretch{1}}(2.21)
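As a numerical aside (not from the lecture), one can check that 2.21 nearly solves 2.9 when k varies slowly. Here is a minimal sketch with an arbitrarily chosen slowly varying k(x):

```python
import numpy as np

# Arbitrary slowly varying wavenumber (illustrative choice, not from the lecture)
x = np.linspace(0.0, 50.0, 20001)
h = x[1] - x[0]
k = 1.0 + 0.05 * np.sin(0.02 * x)     # k'/k^2 ~ 1e-3, so the WKB condition holds

# phi = integral of k dx via the trapezoid rule; U = e^{i phi}/sqrt(k)
phi = np.concatenate(([0.0], np.cumsum(0.5 * (k[1:] + k[:-1]) * h)))
U = np.exp(1j * phi) / np.sqrt(k)

# Residual of U'' + k^2 U, computed with finite differences
Upp = (U[2:] - 2.0 * U[1:-1] + U[:-2]) / h**2
residual = Upp + (k**2 * U)[1:-1]
rel = np.max(np.abs(residual)) / np.max(np.abs(k**2 * U))
assert rel < 1e-3   # tiny relative residual: U is nearly an exact solution
```

Repeating the experiment with a rapidly varying k(x) makes the residual large, which is the point of the validity condition below.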

FIXME: Question: the \pm on the real exponential got absorbed here, but would not U(x) \propto \sqrt{k(x)} e^{\pm i \int dx k(x)} also be a solution? If so, why is that one excluded?

Similarly for the E < V(x) case we can find

\begin{aligned}U(x) \propto \frac{1}{{\sqrt{\kappa(x)}}} e^{\pm \int dx \kappa(x)}.\end{aligned} \hspace{\stretch{1}}(2.22)

The approximation is valid provided:

- V(x) changes very slowly, so that k'(x) is small, and k(x) = \sqrt{2 m (E - V(x))}/\hbar.
- E is very far away from the potential, so that {\left\lvert{(E - V(x))/V(x)}\right\rvert} \gg 1.


\caption{Example of a general potential}

\caption{Turning points where WKB won’t work}

\caption{Diagram for patching method discussion}

WKB won’t work at the turning points in this figure since our main assumption was that

\begin{aligned}{\left\lvert{\frac{k'(x)}{k^2(x)}}\right\rvert} \ll 1\end{aligned} \hspace{\stretch{1}}(3.23)

so we get into trouble where k(x) \sim 0. There are some methods for dealing with this. Our text as well as Griffiths give some examples, but they require Bessel functions and more complex mathematics.

The idea is that one finds the WKB solution in the regions of validity, and then looks for a polynomial solution in the patching region where we are closer to the turning point, probably requiring lookup of various special functions.

This power series method is also outlined in [3], where solutions to connect the regions are expressed in terms of Airy functions.
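As a small aside, the Airy equation y'' = s y that governs the patching region (obtained by linearizing k^2(x) near the turning point) is readily available numerically; a sketch, assuming scipy is at hand:

```python
import numpy as np
from scipy.special import airy

# Near a turning point, k^2(x) ~ -c (x - x0) turns 2.9 into Airy's equation y'' = s y
s = np.linspace(-5.0, 5.0, 4001)
h = s[1] - s[0]
Ai = airy(s)[0]                        # airy() returns (Ai, Ai', Bi, Bi')

# Finite-difference check that Ai'' = s Ai
Aipp = (Ai[2:] - 2.0 * Ai[1:-1] + Ai[:-2]) / h**2
assert np.allclose(Aipp, (s * Ai)[1:-1], atol=1e-4)
```

For s < 0 the function oscillates (classically allowed side) and for s > 0 it decays, which is exactly the behavior that must be stitched together across the turning point.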


[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.

[2] D.J. Griffiths. Introduction to quantum mechanics, volume 1. Pearson Prentice Hall, 2005.

[3] Wikipedia. WKB approximation — Wikipedia, the free encyclopedia, 2011. [Online; accessed 19-October-2011].

Posted in Math and Physics Learning.

My submission for PHY356 (Quantum Mechanics I) Problem Set 5.

Posted by peeterjoot on December 7, 2010

[Click here for a PDF of this post with nicer formatting]

Grading notes.

The pdf version above has been adjusted after seeing the comments from the grading. [Click here for the PDF for the original submission, as found below, uncorrected.]



A particle of mass m moves along the x-direction such that V(X)=\frac{1}{{2}}KX^2. Is the state

\begin{aligned}u(\xi) = B \xi e^{+\xi^2/2},\end{aligned} \hspace{\stretch{1}}(2.1)

where \xi is given by Eq. (9.60), B is a constant, and time t=0, an energy eigenstate of the system? What is the probability per unit length for measuring the particle at position x=0 at t=t_0>0? Explain the physical meaning of the above results.


Is this state an energy eigenstate?

Recall that \xi = \alpha x, \alpha = \sqrt{m\omega/\hbar}, and K = m \omega^2. With this variable substitution Schrödinger’s equation for this harmonic oscillator potential takes the form

\begin{aligned}\frac{d^2 u}{d\xi^2} - \xi^2 u = \frac{2 E }{\hbar\omega} u\end{aligned} \hspace{\stretch{1}}(2.2)

While we can blindly substitute a function of the form \xi e^{\xi^2/2} into this to get

\begin{aligned}\frac{1}{{B}} \left(\frac{d^2 u}{d\xi^2} - \xi^2 u\right)&=\frac{d}{d\xi} \left( 1 + \xi^2 \right) e^{\xi^2/2} - \xi^3 e^{\xi^2/2} \\ &=\left( 2 \xi + \xi + \xi^3 \right) e^{\xi^2/2} - \xi^3 e^{\xi^2/2} \\ &=3 \xi e^{\xi^2/2}\end{aligned}

and formally make the identification E = 3 \omega \hbar/2 = (1 + 1/2) \omega \hbar, this isn’t a normalizable wavefunction, and has no physical relevance, unless we set B = 0.
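The non-normalizability is easy to exhibit numerically; the partial norms over [-L, L] grow without bound (a quick sketch):

```python
import numpy as np

def partial_norm(L, n=100001):
    """Trapezoid-rule integral of |xi e^{+xi^2/2}|^2 over [-L, L]."""
    xi = np.linspace(-L, L, n)
    f = (xi * np.exp(xi**2 / 2.0))**2
    h = xi[1] - xi[0]
    return h * (f.sum() - 0.5 * (f[0] + f[-1]))

# The squared norm explodes as the integration range grows: not normalizable
assert partial_norm(6.0) > 1e6 * partial_norm(3.0)
```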

By changing the problem, this state could be physically relevant. We’d require a potential of the form

\begin{aligned}V(x) =\left\{\begin{array}{l l}f(x) & \quad \mbox{if } x < a \\ \frac{1}{{2}} K x^2 & \quad \mbox{if } a < x < b \\ g(x) & \quad \mbox{if } x > b \\ \end{array}\right.\end{aligned} \hspace{\stretch{1}}(2.3)

For example, f(x) = V_1, g(x) = V_2, for constant V_1, V_2. For such a potential, within the harmonic well, a general solution of the form

\begin{aligned}u(x,t) = \sum_n H_n(\xi) \Bigl(A_n e^{-\xi^2/2} + B_n e^{\xi^2/2} \Bigr) e^{-i E_n t/\hbar},\end{aligned} \hspace{\stretch{1}}(2.4)

is possible since normalization would not prohibit non-zero B_n values in that situation. For the wave function to be physically relevant, we require it to be (absolute) square integrable, with the squared norm integrating to unity over the entire interval.

Probability per unit length at x=0.

We cannot answer the question for the probability that the particle is found at the specific x=0 position at t=t_0 (that probability is zero in a continuous space), but we can answer the question for the probability that a particle is found in an interval surrounding a specific point at this time. By calculating the average of the probability to find the particle in an interval, and dividing by that interval’s length, we arrive at a plausible definition of probability per unit length for an interval surrounding x = x_0

\begin{aligned}P = \text{Probability per unit length near } x = x_0 =\lim_{\epsilon \rightarrow 0} \frac{1}{{\epsilon}} \int_{x_0 - \epsilon/2}^{x_0 + \epsilon/2} {\left\lvert{ \Psi(x, t_0) }\right\rvert}^2 dx = {\left\lvert{\Psi(x_0, t_0)}\right\rvert}^2\end{aligned} \hspace{\stretch{1}}(2.5)

By this definition, the probability per unit length is just the probability density itself, evaluated at the point of interest.

Physically, for an interval small enough that the probability density is constant in magnitude over that interval, this probability per unit length times the length of this small interval, represents the probability that we will find the particle in that interval.
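This limiting definition can be illustrated numerically; a sketch using the harmonic oscillator ground state density (with \alpha = 1) as an example:

```python
import numpy as np

# Example density: harmonic oscillator ground state with alpha = 1
def density(x):
    return np.exp(-x**2) / np.sqrt(np.pi)

def prob_per_unit_length(x0, eps, n=10001):
    """Probability in [x0 - eps/2, x0 + eps/2], divided by eps (trapezoid rule)."""
    x = np.linspace(x0 - eps / 2.0, x0 + eps / 2.0, n)
    f = density(x)
    h = x[1] - x[0]
    prob = h * (f.sum() - 0.5 * (f[0] + f[-1]))
    return prob / eps

# As eps -> 0 this converges to the probability density at x0
x0 = 0.5
assert abs(prob_per_unit_length(x0, 0.01) - density(x0)) < 1e-4
```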

Probability per unit length for the non-normalizable state given.

It seems possible, albeit odd, that this question is asking for the probability per unit length for the non-normalizable E_1 wavefunction 2.1. Since normalization requires B=0, that probability density is simply zero (or undefined, depending on one’s point of view).

Probability per unit length for some more interesting harmonic oscillator states.

Suppose we form the wavefunction for a superposition of all the normalizable states

\begin{aligned}u(x,t) = \sum_n A_n H_n(\xi) e^{-\xi^2/2} e^{-i E_n t/\hbar}\end{aligned} \hspace{\stretch{1}}(2.6)

Here it is assumed that the A_n coefficients yield unit probability

\begin{aligned}\int {\left\lvert{u(x,0)}\right\rvert}^2 dx = \sum_n {\left\lvert{A_n}\right\rvert}^2 = 1\end{aligned} \hspace{\stretch{1}}(2.7)

For the superposition state of 2.6 we have for the probability density

\begin{aligned}{\left\lvert{u}\right\rvert}^2&=\sum_{m,n}A_n A_m^{*} H_n(\xi) H_m(\xi) e^{-\xi^2} e^{-i (E_n - E_m)t_0/\hbar} \\ &=\sum_n{\left\lvert{A_n}\right\rvert}^2 (H_n(\xi))^2 e^{-\xi^2}+\sum_{m \ne n}A_n A_m^{*} H_n(\xi) H_m(\xi) e^{-\xi^2} e^{-i (E_n - E_m)t_0/\hbar} \\ &=\sum_n{\left\lvert{A_n}\right\rvert}^2 (H_n(\xi))^2 e^{-\xi^2}+\sum_{m < n}H_n(\xi) H_m(\xi)\left(A_n A_m^{*}e^{-\xi^2} e^{-i (E_n - E_m)t_0/\hbar}+A_m A_n^{*}e^{-\xi^2} e^{-i (E_m - E_n)t_0/\hbar}\right) \\ &=\sum_n{\left\lvert{A_n}\right\rvert}^2 (H_n(\xi))^2 e^{-\xi^2}+2 \sum_{m < n}H_n(\xi) H_m(\xi)e^{-\xi^2}\Re \left(A_n A_m^{*}e^{-i (E_n - E_m)t_0/\hbar}\right) \\ &=\sum_n{\left\lvert{A_n}\right\rvert}^2 (H_n(\xi))^2 e^{-\xi^2}  \\ &\quad+2 \sum_{m < n}H_n(\xi) H_m(\xi)e^{-\xi^2}\left(\Re ( A_n A_m^{*} ) \cos( (n - m)\omega t_0)+\Im ( A_n A_m^{*} ) \sin( (n - m)\omega t_0)\right) \\ \end{aligned}

Evaluating at the point x = 0, we have

\begin{aligned}{\left\lvert{u(0,t_0)}\right\rvert}^2=\sum_n{\left\lvert{A_n}\right\rvert}^2 (H_n(0))^2 +2 \sum_{m < n} H_n(0) H_m(0) \left( \Re ( A_n A_m^{*} ) \cos( (n - m)\omega t_0) +\Im ( A_n A_m^{*} ) \sin( (n - m)\omega t_0)\right)\end{aligned} \hspace{\stretch{1}}(2.8)

It is interesting that the probability per unit length has time dependence only when the superposition contains more than one energy eigenstate.
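To see this time dependence concretely, here is a small numerical sketch (in units \hbar = \omega = \alpha = 1, mixing the n = 0 and n = 2 states equally, both of which have H_n(0) \ne 0):

```python
import math
import numpy as np

# Units hbar = omega = alpha = 1; H_0(0) = 1, H_2(0) = -2
def N(n):
    return (1.0 / (math.sqrt(math.pi) * 2**n * math.factorial(n)))**0.5

def density_at_origin(t):
    # u(0, t) with A_0 = A_2 = 1/sqrt(2) and E_n = n + 1/2
    u = (N(0) * 1.0 * np.exp(-0.5j * t)
         + N(2) * (-2.0) * np.exp(-2.5j * t)) / math.sqrt(2.0)
    return abs(u)**2

t = np.linspace(0.0, 2.0 * np.pi, 50)
d = np.array([density_at_origin(ti) for ti in t])
assert d.std() > 1e-3        # genuinely time dependent
# oscillation frequency is (n - m) omega = 2, so the period is pi
assert np.allclose(d, [density_at_origin(ti + np.pi) for ti in t])
```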

For a single energy eigenstate, with wavefunction u(x,t) = N_n H_n(\xi) e^{-\xi^2/2} e^{-i E_n t/\hbar}, we have just

\begin{aligned}{\left\lvert{u(0,t_0)}\right\rvert}^2=N_n^2 (H_n(0))^2 = \frac{\alpha}{\sqrt{\pi} 2^n n!} H_n(0)^2\end{aligned} \hspace{\stretch{1}}(2.9)

This is zero for odd n. For even n we have H_n(0) = (-1)^{n/2} \, n!/(n/2)!, so for a single eigenstate with an even energy quantum number, the probability per unit length at x=0 is {\left\lvert{u(0,t_0)}\right\rvert}^2 = \frac{\alpha}{\sqrt{\pi}\, 2^n} \frac{n!}{((n/2)!)^2}.
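The values H_n(0) are easy to check against numpy’s physicists’ Hermite basis (a quick numerical aside; for even n they follow H_n(0) = (-1)^{n/2} \, n!/(n/2)!, e.g. H_4(0) = 12):

```python
import math
from numpy.polynomial.hermite import Hermite   # physicists' convention

for n in range(8):
    Hn0 = Hermite.basis(n)(0.0)
    if n % 2:
        assert abs(Hn0) < 1e-9                 # odd polynomials vanish at the origin
    else:
        expected = (-1)**(n // 2) * math.factorial(n) / math.factorial(n // 2)
        assert abs(Hn0 - expected) < 1e-9      # H_0(0)=1, H_2(0)=-2, H_4(0)=12, H_6(0)=-120
```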


My submission for PHY356 (Quantum Mechanics I) Problem Set 4.

Posted by peeterjoot on December 7, 2010

[Click here for a PDF of this post with nicer formatting]

Grading notes.

The pdf version above has been adjusted with some grading commentary. [Click here for the PDF for the original submission, as found below.]

Problem 1.


Is it possible to derive the eigenvalues and eigenvectors presented in Section 8.2 from those in Section 8.1.2? What does this say about the potential energy operator in these two situations?

For reference, 8.1.2 was a finite potential well, V(x) = V_0 for {\left\lvert{x}\right\rvert} > a, and zero in the interior of the well. This had trigonometric solutions in the interior, and died off exponentially past the boundary of the well.

On the other hand, 8.2 was a delta function potential V(x) = -g \delta(x), which had the solution u(x) = \sqrt{\beta} e^{-\beta {\left\lvert{x}\right\rvert}}, where \beta = m g/\hbar^2.


The pair of figures in the text [1] for these potentials doesn’t make it clear that there are possibly any similarities. The attractive delta function potential isn’t illustrated (although the delta function is, but with opposite sign), and the scaling and the reference energy levels are different. Let’s illustrate these using the same reference energy level and sign conventions to make the similarities more obvious.

\caption{8.1.2 Finite Well potential (with energy shifted downwards by V_0)}

\caption{8.2 Delta function potential.}

The physics isn’t changed by picking a different point for the reference energy level, so let’s compare the two potentials, and their solutions using V(x) = 0 outside of the well for both cases. The method used to solve the finite well problem in the text is hard to follow, so re-doing this from scratch in a slightly tidier way doesn’t hurt.

Schrödinger’s equation for the finite well, in the {\left\lvert{x}\right\rvert} > a region is

\begin{aligned}-\frac{\hbar^2}{2m} u'' = E u = - E_B u,\end{aligned} \hspace{\stretch{1}}(2.1)

where a positive bound state energy E_B = -E > 0 has been introduced.


With

\begin{aligned}\beta = \sqrt{\frac{2 m E_B}{\hbar^2}},\end{aligned} \hspace{\stretch{1}}(2.2)

the wave functions outside of the well are

\begin{aligned}u(x) =\left\{\begin{array}{l l}u(-a) e^{\beta(x+a)} &\quad \mbox{if } x < -a \\ u(a) e^{-\beta(x-a)} &\quad \mbox{if } x > a \\ \end{array}\right.\end{aligned} \hspace{\stretch{1}}(2.3)

Within the well Schrödinger’s equation is

\begin{aligned}-\frac{\hbar^2}{2m} u'' - V_0 u = E u = - E_B u,\end{aligned} \hspace{\stretch{1}}(2.4)


\begin{aligned}u'' = - \frac{2m}{\hbar^2} (V_0 - E_B) u,\end{aligned} \hspace{\stretch{1}}(2.5)

Noting that the bound state energies are the E_B < V_0 values, let \alpha^2 = 2m (V_0 - E_B)/\hbar^2, so that the solutions are of the form

\begin{aligned}u(x) = A e^{i\alpha x} + B e^{-i\alpha x}.\end{aligned} \hspace{\stretch{1}}(2.6)

As was done for the wave functions outside of the well, the normalization constants can be expressed in terms of the values of the wave functions on the boundary. That provides a pair of equations to solve

\begin{aligned}\begin{bmatrix}u(a) \\ u(-a)\end{bmatrix}=\begin{bmatrix}e^{i \alpha a} & e^{-i \alpha a} \\ e^{-i \alpha a} & e^{i \alpha a}\end{bmatrix}\begin{bmatrix}A \\ B\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(2.7)

Inverting this and substitution back into 2.6 yields

\begin{aligned}u(x) &=\begin{bmatrix}e^{i\alpha x} & e^{-i\alpha x}\end{bmatrix}\begin{bmatrix}A \\ B\end{bmatrix} \\ &=\begin{bmatrix}e^{i\alpha x} & e^{-i\alpha x}\end{bmatrix}\frac{1}{{e^{2 i \alpha a} - e^{-2 i \alpha a}}}\begin{bmatrix}e^{i \alpha a} & -e^{-i \alpha a} \\ -e^{-i \alpha a} & e^{i \alpha a}\end{bmatrix}\begin{bmatrix}u(a) \\ u(-a)\end{bmatrix} \\ &=\begin{bmatrix}\frac{\sin(\alpha (a + x))}{\sin(2 \alpha a)} &\frac{\sin(\alpha (a - x))}{\sin(2 \alpha a)}\end{bmatrix}\begin{bmatrix}u(a) \\ u(-a)\end{bmatrix}.\end{aligned}

Expanding the last of these matrix products the wave function is close to completely specified.

\begin{aligned}u(x) =\left\{\begin{array}{l l}u(-a) e^{\beta(x+a)} & \quad \mbox{if } x < -a \\ u(a) \frac{\sin(\alpha (a + x))}{\sin(2 \alpha a)} +u(-a) \frac{\sin(\alpha (a - x))}{\sin(2 \alpha a)} & \quad \mbox{if } {\left\lvert{x}\right\rvert} < a \\ u(a) e^{-\beta(x-a)} & \quad \mbox{if } x > a \\ \end{array}\right.\end{aligned} \hspace{\stretch{1}}(2.8)
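As a sanity check, the sine-form interior solution can be verified numerically against the exponential form, using arbitrary constants (a quick sketch):

```python
import numpy as np

# Arbitrary interior solution u = A e^{i alpha x} + B e^{-i alpha x}
alpha, a = 1.3, 0.7
A, B = 0.4 + 0.2j, -0.1 + 0.9j

def u(x):
    return A * np.exp(1j * alpha * x) + B * np.exp(-1j * alpha * x)

# The sine-form interpolation through the boundary values u(+-a)
x = np.linspace(-a, a, 9)
interp = (np.sin(alpha * (a + x)) * u(a)
          + np.sin(alpha * (a - x)) * u(-a)) / np.sin(2.0 * alpha * a)
assert np.allclose(interp, u(x))
```

Both forms solve u'' + \alpha^2 u = 0 and agree at x = \pm a, so they must agree everywhere in between.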

There are still two unspecified constants u(\pm a) and the constraints on E_B have not been determined (both \alpha and \beta are functions of that energy level). It should be possible to eliminate at least one of the u(\pm a) by computing the wavefunction normalization, and since the well is being narrowed the \alpha term will not be relevant. Since only the vanishingly narrow case where a \rightarrow 0, x \in [-a,a] is of interest, the wave function in that interval approaches

\begin{aligned}u(x) \rightarrow \frac{1}{{2}} (u(a) + u(-a)) + \frac{x}{2a} ( u(a) - u(-a) ) \rightarrow \frac{1}{{2}} (u(a) + u(-a)).\end{aligned} \hspace{\stretch{1}}(2.9)

Since no discontinuity is expected this is just u(a) = u(-a). Let’s write \lim_{a\rightarrow 0} u(a) = A for short, and the limited width well wave function becomes

\begin{aligned}u(x) =\left\{\begin{array}{l l}A e^{\beta x} & \quad \mbox{if } x < 0 \\ A e^{-\beta x} & \quad \mbox{if } x > 0 \\ \end{array}\right.\end{aligned} \hspace{\stretch{1}}(2.10)

This is now the same form as the delta function potential, and normalization also gives A = \sqrt{\beta}.

One task remains before the attractive delta function potential can be considered a limiting case for the finite well, since the relation between a, V_0, and g has not been established. To do so integrate the Schrödinger equation over the infinitesimal range [-a,a]. This was done in the text for the delta function potential, and that provided the relation

\begin{aligned}\beta = \frac{mg}{\hbar^2}\end{aligned} \hspace{\stretch{1}}(2.11)

For the finite well this is

\begin{aligned}-\frac{\hbar^2}{2m} \int_{-a}^a u'' - V_0 \int_{-a}^a u = -E_B \int_{-a}^a u\end{aligned} \hspace{\stretch{1}}(2.12)

In the limit as a \rightarrow 0 this is

\begin{aligned}\frac{\hbar^2}{2m} (u'(a) - u'(-a)) + V_0 2 a u(0) = 2 E_B a u(0).\end{aligned} \hspace{\stretch{1}}(2.13)

Some care is required with the V_0 a term since a \rightarrow 0 as V_0 \rightarrow \infty, but the E_B term is unambiguously killed, leaving

\begin{aligned}\frac{\hbar^2}{2m} u(0) (-2\beta e^{-\beta a}) = -V_0 2 a u(0).\end{aligned} \hspace{\stretch{1}}(2.14)

The exponential vanishes in the limit and leaves

\begin{aligned}\beta = \frac{m (2 a) V_0}{\hbar^2}\end{aligned} \hspace{\stretch{1}}(2.15)

Comparing to 2.11 from the attractive delta function completes the problem. The conclusion is that when the finite well is narrowed with a \rightarrow 0, also letting V_0 \rightarrow \infty such that the absolute area of the well g = (2 a) V_0 is maintained, the finite potential well produces exactly the attractive delta function wave function and associated bound state energy.
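This limiting procedure can also be checked numerically (a sketch in units \hbar = m = 1, assuming scipy is available; the even-parity matching condition \alpha \tan(\alpha a) = \beta for the finite well is taken as a known standard result, since it is not derived above):

```python
import numpy as np
from scipy.optimize import brentq

# Units hbar = m = 1; hold g = 2 a V0 fixed while narrowing the well
g = 1.0          # delta potential predicts E_B = m g^2 / (2 hbar^2) = 0.5

def bound_energy(a):
    V0 = g / (2.0 * a)
    # Standard even-parity matching condition for the finite well:
    #   alpha tan(alpha a) = beta, with alpha^2 = 2(V0 - E), beta^2 = 2E
    def f(E):
        return (np.sqrt(2.0 * (V0 - E)) * np.tan(np.sqrt(2.0 * (V0 - E)) * a)
                - np.sqrt(2.0 * E))
    return brentq(f, 1e-9, V0 - 1e-9)

# The bound state energy approaches the delta-well value as a -> 0
assert abs(bound_energy(0.01) - 0.5) < 0.02
```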

Problem 2.


For the hydrogen atom, determine {\langle {nlm} \rvert}(1/R){\lvert {nlm} \rangle} and 1/{\langle {nlm} \rvert}R{\lvert {nlm} \rangle} such that (nlm)=(211) and R is the radial position operator (X^2+Y^2+Z^2)^{1/2}. What do these quantities represent physically and are they the same?


Both of the computation tasks for the hydrogen like atom require expansion of a braket of the following form

\begin{aligned}{\langle {nlm} \rvert} A(R) {\lvert {nlm} \rangle},\end{aligned} \hspace{\stretch{1}}(3.16)

where A(R) = R = (X^2 + Y^2 + Z^2)^{1/2} or A(R) = 1/R.

The spherical representation of the identity resolution is required to convert this braket into integral form

\begin{aligned}\mathbf{1} = \int r^2 \sin\theta dr d\theta d\phi {\lvert { r \theta \phi} \rangle}{\langle { r \theta \phi} \rvert},\end{aligned} \hspace{\stretch{1}}(3.17)

where the spherical wave function is given by the braket \left\langle{{ r \theta \phi}} \vert {{nlm}}\right\rangle = R_{nl}(r) Y_{lm}(\theta,\phi).

Additionally, the radial form of the delta function will be required, which is

\begin{aligned}\delta(\mathbf{x} - \mathbf{x}') = \frac{1}{{r^2 \sin\theta}} \delta(r - r') \delta(\theta - \theta') \delta(\phi - \phi')\end{aligned} \hspace{\stretch{1}}(3.18)

Two applications of the identity operator to the braket yield

\begin{aligned}{\langle {nlm} \rvert} A(R) {\lvert {nlm} \rangle} \\ &={\langle {nlm} \rvert} \mathbf{1} A(R) \mathbf{1} {\lvert {nlm} \rangle} \\ &=\int dr d\theta d\phi dr' d\theta' d\phi'r^2 \sin\theta {r'}^2 \sin\theta' \left\langle{{nlm}} \vert {{ r \theta \phi}}\right\rangle{\langle { r \theta \phi} \rvert} A(R) {\lvert { r' \theta' \phi'} \rangle}\left\langle{{ r' \theta' \phi'}} \vert {{nlm}}\right\rangle \\ &=\int dr d\theta d\phi dr' d\theta' d\phi'r^2 \sin\theta {r'}^2 \sin\theta' R_{nl}(r) Y_{lm}^{*}(\theta, \phi){\langle { r \theta \phi} \rvert} A(R) {\lvert { r' \theta' \phi'} \rangle}R_{nl}(r') Y_{lm}(\theta', \phi') \\ \end{aligned}

To continue an assumption about the matrix element {\langle { r \theta \phi} \rvert} A(R) {\lvert { r' \theta' \phi'} \rangle} is required. It seems reasonable that this would be

\begin{aligned}{\langle { r \theta \phi} \rvert} A(R) {\lvert { r' \theta' \phi'} \rangle} = \delta(\mathbf{x} - \mathbf{x}') A(r) = \frac{1}{{r^2 \sin\theta}} \delta(r-r') \delta(\theta -\theta')\delta(\phi-\phi') A(r).\end{aligned} \hspace{\stretch{1}}(3.19)

The braket can now be written completely in integral form as

\begin{aligned}{\langle {nlm} \rvert} A(R) {\lvert {nlm} \rangle} \\ &=\int dr d\theta d\phi dr' d\theta' d\phi'r^2 \sin\theta {r'}^2 \sin\theta' R_{nl}(r) Y_{lm}^{*}(\theta, \phi)\frac{1}{{r^2 \sin\theta}} \delta(r-r') \delta(\theta -\theta')\delta(\phi-\phi') A(r)R_{nl}(r') Y_{lm}(\theta', \phi') \\ &=\int dr d\theta d\phi {r'}^2 \sin\theta' dr' d\theta' d\phi'R_{nl}(r) Y_{lm}^{*}(\theta, \phi)\delta(r-r') \delta(\theta -\theta')\delta(\phi-\phi') A(r)R_{nl}(r') Y_{lm}(\theta', \phi') \\ \end{aligned}

Application of the delta functions then reduces the integral. Since the only \theta and \phi dependence is in the (orthonormal) Y_{lm} terms, those terms integrate to unity and drop out

\begin{aligned}{\langle {nlm} \rvert} A(R) {\lvert {nlm} \rangle}&=\int dr d\theta d\phi r^2 \sin\theta R_{nl}(r) Y_{lm}^{*}(\theta, \phi)A(r)R_{nl}(r) Y_{lm}(\theta, \phi) \\ &=\int dr r^2 R_{nl}(r) A(r)R_{nl}(r) \underbrace{\int\sin\theta d\theta d\phi Y_{lm}^{*}(\theta, \phi)Y_{lm}(\theta, \phi) }_{=1}\\ \end{aligned}

This leaves just the radial wave functions in the integral

\begin{aligned}{\langle {nlm} \rvert} A(R) {\lvert {nlm} \rangle}=\int dr r^2 R_{nl}^2(r) A(r)\end{aligned} \hspace{\stretch{1}}(3.20)

As a consistency check, observe that with A(r) = 1, this integral evaluates to 1 according to equation (8.274) in the text, so we can think of (r R_{nl}(r))^2 as the radial probability density for functions of r.

The problem asks specifically for these expectation values for the {\lvert {211} \rangle} state. For that state the radial wavefunction is found in (8.277) as

\begin{aligned}R_{21}(r) = \left(\frac{Z}{2a_0}\right)^{3/2} \frac{ Z r }{a_0 \sqrt{3}} e^{-Z r/2 a_0}\end{aligned} \hspace{\stretch{1}}(3.21)

The braket can now be written explicitly

\begin{aligned}{\langle {21m} \rvert} A(R) {\lvert {21m} \rangle}=\frac{1}{{24}} \left(\frac{ Z }{a_0 } \right)^5\int_0^\infty dr r^4 e^{-Z r/ a_0}A(r)\end{aligned} \hspace{\stretch{1}}(3.22)

Now, let’s consider the two functions A(r) separately. First for A(r) = r we have

\begin{aligned}{\langle {21m} \rvert} R {\lvert {21m} \rangle}&=\frac{1}{{24}} \left(\frac{ Z }{a_0 } \right)^5\int_0^\infty dr r^5 e^{-Z r/ a_0} \\ &=\frac{ a_0 }{ 24 Z } \int_0^\infty du u^5 e^{-u} \\ \end{aligned}

The last integral evaluates to 120, leaving

\begin{aligned}{\langle {21m} \rvert} R {\lvert {21m} \rangle}=\frac{ 5 a_0 }{ Z }.\end{aligned} \hspace{\stretch{1}}(3.23)

The expectation value associated with this {\lvert {21m} \rangle} state for the radial position is found to be proportional to the Bohr radius. For the hydrogen atom where Z=1 this average value for repeated measurements of the physical quantity associated with the operator R is found to be 5 times the Bohr radius for n=2, l=1 states.

Our problem actually asks for the inverse of this expectation value, and for reference this is

\begin{aligned}1/ {\langle {21m} \rvert} R {\lvert {21m} \rangle}=\frac{ Z }{ 5 a_0 } \end{aligned} \hspace{\stretch{1}}(3.24)

Performing the same task for A(R) = 1/R

\begin{aligned}{\langle {21m} \rvert} 1/R {\lvert {21m} \rangle}&=\frac{1}{{24}} \left(\frac{ Z }{a_0 } \right)^5\int_0^\infty dr r^3e^{-Z r/ a_0} \\ &=\frac{1}{{24}} \frac{ Z }{ a_0 } \int_0^\infty du u^3e^{-u}.\end{aligned}

This last integral has value 6, and we have the second part of the computational task complete

\begin{aligned}{\langle {21m} \rvert} 1/R {\lvert {21m} \rangle} = \frac{1}{{4}} \frac{ Z }{ a_0 } \end{aligned} \hspace{\stretch{1}}(3.25)

This answers the question of whether 3.24 and 3.25 are equal: they are not.

Still remaining for this problem is the question of the what these quantities represent physically.

The quantity {\langle {nlm} \rvert} R {\lvert {nlm} \rangle} is the expectation value for the radial position of the particle measured from the center of mass of the system. This is the average outcome for many measurements of this radial distance when the system is prepared in the state {\lvert {nlm} \rangle} prior to each measurement.

Interestingly, the physical quantity that we associate with the operator R has a different measurable value than the inverse of the expectation value for the inverted operator 1/R. Regardless, we have a physical (observable) quantity associated with the operator 1/R, and when the system is prepared in state {\lvert {21m} \rangle} prior to each measurement, the average outcome of many measurements of this physical quantity produces this value {\langle {21m} \rvert} 1/R {\lvert {21m} \rangle} = Z/n^2 a_0, a quantity inversely proportional to the Bohr radius.

ASIDE: Comparing to the general case.

As a confirmation of the results obtained, we can check 3.24 and 3.25 against the general form of the expectation values \left\langle{{R^s}}\right\rangle for various powers s of the radial position operator. These are tabulated in various references, and in [2] these and harder-looking expectation values are left as an exercise for the reader to prove. For Z=1 (without proof):

\begin{aligned}\left\langle{{R}}\right\rangle &= \frac{a_0}{2} ( 3 n^2 -l (l+1) ) \\ \left\langle{{1/R}}\right\rangle &= \frac{1}{n^2 a_0} \end{aligned} \hspace{\stretch{1}}(3.26)

It is curious to me that in the general expectation values noted in 3.26 we have an l quantum number dependence for \left\langle{{R}}\right\rangle, but only an n quantum number dependence for \left\langle{{1/R}}\right\rangle. It is not obvious to me why this would be the case.
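Both the specific results and the general formulas are easy to confirm by direct quadrature (a sketch in units where Z = a_0 = 1, assuming scipy is available):

```python
import numpy as np
from scipy.integrate import quad

# Units Z = a0 = 1, so R_21(r) = (1/2)^{3/2} (r/sqrt(3)) e^{-r/2}
def R21(r):
    return 0.5**1.5 * (r / np.sqrt(3.0)) * np.exp(-r / 2.0)

norm = quad(lambda r: r**2 * R21(r)**2, 0, np.inf)[0]
exp_r = quad(lambda r: r**3 * R21(r)**2, 0, np.inf)[0]
exp_inv_r = quad(lambda r: r * R21(r)**2, 0, np.inf)[0]

assert abs(norm - 1.0) < 1e-8
assert abs(exp_r - 5.0) < 1e-8        # (a0/2)(3 n^2 - l(l+1)) = 5 for n = 2, l = 1
assert abs(exp_inv_r - 0.25) < 1e-8   # 1/(n^2 a0) = 1/4
```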


[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.

[2] R. Liboff. Introductory quantum mechanics. Cambridge: Addison-Wesley Press, Inc, 2003.


PHY356F: Quantum Mechanics I. Lecture 11 notes. Harmonic Oscillator.

Posted by peeterjoot on November 30, 2010

[Click here for a PDF of this post with nicer formatting]


Why study this problem?

It is relevant to describing the oscillation of molecules, quantum states of light, vibrations of the lattice structure of a solid, and so on.

FIXME: projected picture of masses on springs, with a ladle shaped well, approximately harmonic about the minimum of the bucket.

The problem to solve is the one dimensional Hamiltonian

\begin{aligned}V(X) &= \frac{1}{{2}} K X^2 \\ K &= m \omega^2 \\ H &= \frac{P^2}{2m} + V(X)\end{aligned} \hspace{\stretch{1}}(8.168)

where m is the mass, \omega is the frequency, X is the position operator, and P is the momentum operator. Of these quantities, \omega and m are classical quantities.

This problem can be used to illustrate some of the reasons why we study the different pictures (Heisenberg, interaction and Schrödinger). This is a problem well suited to all of these. (FIXME: look up an example of this with the interaction picture. The book covers the Heisenberg and Schrödinger methods.)

We attack this with a non-intuitive, but cool technique. Introduce the raising a^\dagger and lowering a operators:

\begin{aligned}a &= \sqrt{\frac{m \omega}{2 \hbar}} \left( X + i \frac{P}{m\omega} \right) \\ a^\dagger &= \sqrt{\frac{m \omega}{2 \hbar}} \left( X - i \frac{P}{m\omega} \right)\end{aligned} \hspace{\stretch{1}}(8.171)

\paragraph{Question:} are we using the dagger for more than Hermitian conjugation in this case?
\paragraph{Answer:} No, this is precisely the Hermitian conjugation operation.

Solving for X and P in terms of a and a^\dagger, we have

\begin{aligned}a + a^\dagger &= \sqrt{\frac{m \omega}{2 \hbar}} 2 X  \\ a - a^\dagger &= \sqrt{\frac{m \omega}{2 \hbar}} 2 i \frac{P }{m \omega}\end{aligned}


\begin{aligned}X &= \sqrt{\frac{\hbar}{2 m \omega}} (a^\dagger + a) \\ P &= i \sqrt{\frac{\hbar m \omega}{2}} (a^\dagger -a)\end{aligned} \hspace{\stretch{1}}(8.173)

Express H in terms of a and a^\dagger

\begin{aligned}H &= \frac{P^2}{2m} + \frac{1}{{2}} K X^2  \\ &= \frac{1}{2m} \left(i \sqrt{\frac{\hbar m \omega}{2}} (a^\dagger -a)\right)^2+ \frac{1}{{2}} m \omega^2\left(\sqrt{\frac{\hbar}{2 m \omega}} (a^\dagger + a) \right)^2 \\ &= \frac{-\hbar \omega}{4} \left(a^\dagger a^\dagger + a^2 - a a^\dagger - a^\dagger a\right)+ \frac{\hbar \omega}{4}\left(a^\dagger a^\dagger + a^2 + a a^\dagger + a^\dagger a\right) \\ \end{aligned}

\begin{aligned}H= \frac{\hbar \omega}{2} \left(a a^\dagger + a^\dagger a\right) = \frac{\hbar \omega}{2} \left(2 a^\dagger a + \left[{a},{a^\dagger}\right]\right) \end{aligned} \hspace{\stretch{1}}(8.175)

Since \left[{X},{P}\right] = i \hbar \mathbf{1}, we can show that \left[{a},{a^\dagger}\right] = \mathbf{1}. Solve for \left[{a},{a^\dagger}\right] as follows

\begin{aligned}i \hbar &=\left[{X},{P}\right] \\ &=\left[{\sqrt{\frac{\hbar}{2 m \omega}} (a^\dagger + a) },{i \sqrt{\frac{\hbar m \omega}{2}} (a^\dagger -a)}\right] \\ &=\sqrt{\frac{\hbar}{2 m \omega}} i \sqrt{\frac{\hbar m \omega}{2}} \left[{a^\dagger + a},{a^\dagger -a}\right] \\ &= \frac{i \hbar}{2}\left(\left[{a^\dagger},{a^\dagger}\right] -\left[{a^\dagger},{a}\right] +\left[{a},{a^\dagger}\right] -\left[{a},{a}\right] \right)  \\ &= \frac{i \hbar}{2}\left(0+2 \left[{a},{a^\dagger}\right] -0\right)\end{aligned}

Comparing LHS and RHS we have as stated

\begin{aligned}\left[{a},{a^\dagger}\right] = \mathbf{1}\end{aligned} \hspace{\stretch{1}}(8.176)

and thus from 8.175 we have

\begin{aligned}H = \hbar \omega \left( a^\dagger a + \frac{\mathbf{1}}{2} \right)\end{aligned} \hspace{\stretch{1}}(8.177)
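As a numerical aside, both 8.176 and 8.177 can be checked in a truncated number-basis matrix representation (a sketch; truncation corrupts only the last diagonal entry of the commutator):

```python
import numpy as np

N = 8                                      # truncated number-basis dimension
a = np.diag(np.sqrt(np.arange(1, N)), 1)   # lowering operator: a|n> = sqrt(n)|n-1>
ad = a.T                                   # raising operator (Hermitian conjugate)

# [a, a+] = 1 holds except in the last diagonal entry, a truncation artifact
comm = a @ ad - ad @ a
assert np.allclose(comm[:-1, :-1], np.eye(N - 1))

# H = a+ a + 1/2 (in units of hbar omega) has eigenvalues n + 1/2
H = ad @ a + 0.5 * np.eye(N)
assert np.allclose(np.diag(H), np.arange(N) + 0.5)
```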

Let {\lvert {n} \rangle} be the eigenstate of H so that H{\lvert {n} \rangle} = E_n {\lvert {n} \rangle}. From 8.177 we have

\begin{aligned}H {\lvert {n} \rangle} =\hbar \omega \left( a^\dagger a + \frac{\mathbf{1}}{2} \right) {\lvert {n} \rangle}\end{aligned} \hspace{\stretch{1}}(8.178)


\begin{aligned}a^\dagger a {\lvert {n} \rangle} + \frac{{\lvert {n} \rangle}}{2} = \frac{E_n}{\hbar \omega} {\lvert {n} \rangle}\end{aligned} \hspace{\stretch{1}}(8.179)

\begin{aligned}a^\dagger a {\lvert {n} \rangle} = \left( \frac{E_n}{\hbar \omega} - \frac{1}{{2}} \right) {\lvert {n} \rangle} = \lambda_n {\lvert {n} \rangle}\end{aligned} \hspace{\stretch{1}}(8.180)

We wish now to find the eigenstates of the “Number” operator a^\dagger a, which are simultaneously eigenstates of the Hamiltonian operator.

Observe that we have

\begin{aligned}a^\dagger a (a^\dagger {\lvert {n} \rangle} ) &= a^\dagger ( a a^\dagger {\lvert {n} \rangle} ) \\ &= a^\dagger ( \mathbf{1} + a^\dagger a ) {\lvert {n} \rangle}\end{aligned}

where we used \left[{a},{a^\dagger}\right] = a a^\dagger - a^\dagger a = \mathbf{1}.

\begin{aligned}a^\dagger a (a^\dagger {\lvert {n} \rangle} ) &= a^\dagger \left( \mathbf{1} + \frac{E_n}{\hbar\omega} - \frac{\mathbf{1}}{2} \right) {\lvert {n} \rangle} \\ &= a^\dagger \left( \frac{E_n}{\hbar\omega} + \frac{\mathbf{1}}{2} \right) {\lvert {n} \rangle},\end{aligned}


\begin{aligned}a^\dagger a (a^\dagger {\lvert {n} \rangle} ) = (\lambda_n + 1) (a^\dagger {\lvert {n} \rangle} )\end{aligned} \hspace{\stretch{1}}(8.181)

The new state a^\dagger {\lvert {n} \rangle} is presumed to lie in the same space, expressible as a linear combination of the basis states of this space. Applying the number operator a^\dagger a to this new state, we find that the eigenvalue is raised by one, but the state is otherwise unchanged: any state a^\dagger {\lvert {n} \rangle} is an eigenstate of a^\dagger a, and therefore also an eigenstate of the Hamiltonian.

Play the same game and win big by discovering that

\begin{aligned}a^\dagger a ( a {\lvert {n} \rangle} ) = (\lambda_n -1) (a {\lvert {n} \rangle} )\end{aligned} \hspace{\stretch{1}}(8.182)

Since \lambda_n = {\langle {n} \rvert} a^\dagger a {\lvert {n} \rangle} is the squared norm of a {\lvert {n} \rangle}, and thus non-negative, the lowering cannot continue indefinitely. There will be some state {\lvert {0} \rangle} such that

\begin{aligned}a {\lvert {0} \rangle} = 0 {\lvert {0} \rangle}\end{aligned} \hspace{\stretch{1}}(8.183)

which implies

\begin{aligned}a^\dagger (a {\lvert {0} \rangle}) = (a^\dagger a) {\lvert {0} \rangle} = 0\end{aligned} \hspace{\stretch{1}}(8.184)

so from 8.180 we have

\begin{aligned}\lambda_0 = 0\end{aligned} \hspace{\stretch{1}}(8.185)

Observe that, since \lambda_0 = 0 and each application of a^\dagger raises the eigenvalue by one, we can identify \lambda_n = n, for

\begin{aligned}\lambda_n = \left( \frac{E_n}{\hbar\omega} - \frac{1}{{2}} \right) = n,\end{aligned} \hspace{\stretch{1}}(8.186)


\begin{aligned}\frac{E_n}{\hbar\omega} = n + \frac{1}{{2}}\end{aligned} \hspace{\stretch{1}}(8.187)


\begin{aligned}E_n = \hbar \omega \left( n + \frac{1}{{2}} \right)\end{aligned} \hspace{\stretch{1}}(8.188)

where n = 0, 1, 2, \cdots.

We can write

\begin{aligned}\hbar \omega \left( a^\dagger a + \frac{1}{{2}} \mathbf{1} \right) {\lvert {n} \rangle} &= E_n {\lvert {n} \rangle} \\ a^\dagger a {\lvert {n} \rangle} + \frac{1}{{2}} {\lvert {n} \rangle} &= \frac{E_n}{\hbar \omega} {\lvert {n} \rangle} \\ \end{aligned}


\begin{aligned}a^\dagger a {\lvert {n} \rangle} = \left( \frac{E_n}{\hbar \omega} - \frac{1}{{2}} \right) {\lvert {n} \rangle} = \lambda_n {\lvert {n} \rangle} = n {\lvert {n} \rangle}\end{aligned} \hspace{\stretch{1}}(8.189)

We call this operator a^\dagger a = N, the number operator, so that

\begin{aligned}N {\lvert {n} \rangle} = n {\lvert {n} \rangle}\end{aligned} \hspace{\stretch{1}}(8.190)

Relating states.

Recall the calculation we performed for

\begin{aligned}L_{+} {\lvert {lm} \rangle} &= C_{+} {\lvert {l, m+1} \rangle} \\ L_{-} {\lvert {lm} \rangle} &= C_{-} {\lvert {l, m-1} \rangle}\end{aligned} \hspace{\stretch{1}}(9.191)

where C_{+} and C_{-} are constants. The next game we are going to play is to work out C_n for the lowering operation

\begin{aligned}a{\lvert {n} \rangle} = C_n {\lvert {n-1} \rangle}\end{aligned} \hspace{\stretch{1}}(9.193)

and the raising operation

\begin{aligned}a^\dagger {\lvert {n} \rangle} = B_n {\lvert {n+1} \rangle}.\end{aligned} \hspace{\stretch{1}}(9.194)

For the Hermitian conjugate of a {\lvert {n} \rangle} we have

\begin{aligned}(a {\lvert {n} \rangle})^\dagger = ( C_n {\lvert {n-1} \rangle} )^\dagger = C_n^{*} {\langle {n-1} \rvert}\end{aligned} \hspace{\stretch{1}}(9.195)


\begin{aligned}({\langle {n} \rvert} a^\dagger) (a {\lvert {n} \rangle}) = C_n C_n^{*} \left\langle{{n-1}} \vert {{n-1}}\right\rangle = {\left\lvert{C_n}\right\rvert}^2\end{aligned} \hspace{\stretch{1}}(9.196)

Expanding the LHS we have

\begin{aligned}{\left\lvert{C_n}\right\rvert}^2 &={\langle {n} \rvert} a^\dagger a {\lvert {n} \rangle} \\ &={\langle {n} \rvert} n {\lvert {n} \rangle} \\ &=n \left\langle{{n}} \vert {{n}}\right\rangle \\ &=n \end{aligned}


\begin{aligned}C_n = \sqrt{n}\end{aligned} \hspace{\stretch{1}}(9.197)


Similarly, for the raising operation we have

\begin{aligned}({\langle {n} \rvert} a) (a^\dagger {\lvert {n} \rangle}) = B_n B_n^{*} \left\langle{{n+1}} \vert {{n+1}}\right\rangle = {\left\lvert{B_n}\right\rvert}^2\end{aligned} \hspace{\stretch{1}}(9.198)


\begin{aligned}{\left\lvert{B_n}\right\rvert}^2 &={\langle {n} \rvert} \underbrace{a a^\dagger}_{a a^\dagger - a^\dagger a = \mathbf{1}} {\lvert {n} \rangle} \\ &={\langle {n} \rvert} \left( \mathbf{1} + a^\dagger a \right) {\lvert {n} \rangle} \\ &=(1 + n) \left\langle{{n}} \vert {{n}}\right\rangle \\ &=1 + n \end{aligned}


\begin{aligned}B_n = \sqrt{n + 1}\end{aligned} \hspace{\stretch{1}}(9.199)
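These ladder coefficients are easy to verify numerically; a small sketch with the same truncated matrix representation of a (the matrix representation and truncation size are my own choices, not anything from the lecture):

```python
import numpy as np

# Truncated lowering operator: a|n> = sqrt(n)|n-1>.
N = 10
a = np.diag(np.sqrt(np.arange(1, N + 1)), k=1)
ad = a.conj().T

results = []
for n in range(1, 5):
    ket = np.zeros(N + 1)
    ket[n] = 1.0                         # basis state |n>
    C_n = np.linalg.norm(a @ ket)        # norm of a|n>, should be sqrt(n)
    B_n = np.linalg.norm(ad @ ket)       # norm of a^dagger|n>, should be sqrt(n+1)
    results.append((C_n, B_n))
print(results)
```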

Heisenberg picture.

\paragraph{How does the lowering operator a evolve in time?}

\paragraph{A:} Recall that for a general operator A, we have for the time evolution of that operator

\begin{aligned}i \hbar \frac{d A}{dt} = \left[{ A },{H}\right]\end{aligned} \hspace{\stretch{1}}(10.200)

Let’s solve this one.

\begin{aligned}i \hbar \frac{d a}{dt} &= \left[{ a },{H}\right] \\ &= \left[{ a },{ \hbar \omega (a^\dagger a + \mathbf{1}/2) }\right] \\ &= \hbar\omega \left[{ a },{ (a^\dagger a + \mathbf{1}/2) }\right] \\ &= \hbar\omega \left[{ a },{ a^\dagger a }\right] \\ &= \hbar\omega \left( a a^\dagger a - a^\dagger a a \right) \\ &= \hbar\omega \left( (a a^\dagger) a - a^\dagger a a \right) \\ &= \hbar\omega \left( (a^\dagger a + \mathbf{1}) a - a^\dagger a a \right) \\ &= \hbar\omega a \end{aligned}

Even though a is an operator, it undergoes a time evolution, and we can treat it like a function of time, solving for a in the differential equation

\begin{aligned}\frac{d a}{dt} = -i \omega a \end{aligned} \hspace{\stretch{1}}(10.201)

This has the solution

\begin{aligned}a = a(0) e^{-i \omega t}\end{aligned} \hspace{\stretch{1}}(10.202)

here a(0) is an operator, the value of that operator at t = 0. The exponential here is just a scalar (not affected by the operator, so we can put it on either side of the operator as desired).


Verifying, we have

\begin{aligned}a' = a(0) \frac{d}{dt} e^{-i \omega t} = a(0) (-i \omega) e^{-i \omega t} = -i \omega a\end{aligned} \hspace{\stretch{1}}(10.203)
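We can also check the operator solution numerically by conjugating a(0) with the propagator for the truncated oscillator (the matrix representation and the units \hbar = \omega = 1 are my own choices for this sketch; since H is diagonal in the number basis, the propagator can be written down exactly):

```python
import numpy as np

hbar = omega = 1.0
N = 12
a0 = np.diag(np.sqrt(np.arange(1, N + 1)), k=1)    # a(0) in the number basis
E_n = hbar * omega * (np.arange(N + 1) + 0.5)      # H is diagonal here

t = 0.7
U = np.diag(np.exp(-1j * E_n * t / hbar))          # propagator exp(-i H t / hbar)
a_t = U.conj().T @ a0 @ U                          # Heisenberg picture a(t)
ok = np.allclose(a_t, a0 * np.exp(-1j * omega * t))
print(ok)                                          # -> True
```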

A couple comments on the Schr\”{o}dinger picture.

We don’t do this in class, but it is very similar to the approach used for the hydrogen atom. See the text for full details.

In the Schr\”{o}dinger picture,

\begin{aligned}-\frac{\hbar^2}{2m} \frac{d^2 u}{dx^2} + \frac{1}{{2}} m \omega^2 x^2 u = E u\end{aligned} \hspace{\stretch{1}}(11.204)

This works directly with the wavefunction representation; we can relate the two pictures by the identification u = u(x) = \left\langle{{x}} \vert {{u}}\right\rangle.

In 11.204, we can switch to dimensionless quantities with

\begin{aligned}\xi = \alpha x\end{aligned} \hspace{\stretch{1}}(11.205)


\begin{aligned}\alpha = \sqrt{\frac{m \omega}{\hbar}}\end{aligned} \hspace{\stretch{1}}(11.206)

This gives, with \lambda = 2E/\hbar\omega,

\begin{aligned}\frac{d^2 u}{d\xi^2} + (\lambda - \xi^2) u = 0\end{aligned} \hspace{\stretch{1}}(11.207)

We can use power series expansion methods to solve this, and find that normalizability requires a terminating series, which can be written in terms of the Hermite polynomials (courtesy of the clever French once again).

When all is said and done we will get the energy eigenvalues once again

\begin{aligned}E = E_n = \hbar \omega \left( n + \frac{1}{{2}} \right)\end{aligned} \hspace{\stretch{1}}(11.208)
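A finite-difference check of the dimensionless equation reproduces \lambda = 2E/\hbar\omega = 2n + 1 (the grid size and box length below are arbitrary choices of mine for this sketch):

```python
import numpy as np

# Discretize -u'' + xi^2 u = lambda u on a box large enough that u ~ 0 at the ends.
L, M = 8.0, 1000
xi = np.linspace(-L, L, M)
h = xi[1] - xi[0]

A = (np.diag(2.0 / h**2 + xi**2)
     + np.diag(-np.ones(M - 1) / h**2, 1)
     + np.diag(-np.ones(M - 1) / h**2, -1))

lam = np.sort(np.linalg.eigvalsh(A))[:4]
print(np.round(lam, 2))                 # approximately 1, 3, 5, 7
```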

Back to the Heisenberg picture.

Let us express

\begin{aligned}\left\langle{{x}} \vert {{n}}\right\rangle = u_n(x)\end{aligned} \hspace{\stretch{1}}(12.209)


Since

\begin{aligned}a {\lvert {0} \rangle} = 0,\end{aligned} \hspace{\stretch{1}}(12.210)

we have

\begin{aligned}0  =\left( X + i \frac{P}{m \omega} \right) {\lvert {0} \rangle},\end{aligned} \hspace{\stretch{1}}(12.211)


\begin{aligned}0 &= {\langle {x} \rvert} \left( X + i \frac{P}{m \omega} \right) {\lvert {0} \rangle} \\ &= {\langle {x} \rvert} X {\lvert {0 } \rangle} + i \frac{1}{m \omega} {\langle {x} \rvert} P {\lvert {0} \rangle} \\ &= x \left\langle{{x}} \vert {{0}}\right\rangle + i \frac{1}{m \omega} {\langle {x} \rvert} P {\lvert {0} \rangle} \\ \end{aligned}

Recall that our matrix operator is

Recall that the matrix element of the momentum operator is

\begin{aligned}{\langle {x'} \rvert} P {\lvert {x} \rangle} = \delta(x - x') \left( -i \hbar \frac{d}{dx} \right)\end{aligned} \hspace{\stretch{1}}(12.212)

\begin{aligned}{\langle {x} \rvert} P {\lvert {0} \rangle} &={\langle {x} \rvert} P \underbrace{\int {\lvert {x'} \rangle} {\langle {x'} \rvert} dx' }_{= \mathbf{1}}{\lvert {0} \rangle} \\ &=\int {\langle {x} \rvert} P {\lvert {x'} \rangle} \left\langle{{x'}} \vert {{0}}\right\rangle dx' \\ &=\int \delta(x - x') \left( -i \hbar \frac{d}{dx} \right)\left\langle{{x'}} \vert {{0}}\right\rangle dx' \\ &=\left( -i \hbar \frac{d}{dx} \right)\left\langle{{x}} \vert {{0}}\right\rangle\end{aligned}

We have then

\begin{aligned}0 =x u_0(x) + \frac{\hbar}{m \omega} \frac{d u_0(x)}{dx}\end{aligned} \hspace{\stretch{1}}(12.213)

NOTE: picture of the solution to this LDE on slide…. but I didn’t look closely enough.
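Separating, u_0'/u_0 = -m \omega x/\hbar, so the ground state is the Gaussian u_0 \propto e^{-m\omega x^2/2\hbar}. A quick numerical residual check (the units and grid below are my own choices for this sketch):

```python
import numpy as np

hbar = m = omega = 1.0
x = np.linspace(-3, 3, 601)
u0 = np.exp(-m * omega * x**2 / (2 * hbar))   # candidate ground state
du0 = np.gradient(u0, x)                       # numerical derivative

# The first order equation: x u0 + (hbar / m omega) u0' should vanish.
residual = x * u0 + (hbar / (m * omega)) * du0
ok = np.max(np.abs(residual)) < 1e-3
print(ok)                                      # -> True
```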


Notes and problems for Desai chapter IV.

Posted by peeterjoot on October 12, 2010



Chapter IV notes and problems for [1].

There’s a lot of magic related to the spherical harmonics in this chapter, with identities pulled out of the author’s butt. It would be nice to work through that, but I need a better reference to work from (or to skip ahead to chapter 26, where some of this is apparently derived).

Other stuff pending background derivation and verification:

\item Antisymmetric tensor summation identity.

\begin{aligned}\sum_i \epsilon_{ijk} \epsilon_{iab} = \delta_{ja} \delta_{kb} - \delta_{jb}\delta_{ka}\end{aligned} \hspace{\stretch{1}}(1.1)

This is obviously the coordinate equivalent of the dot product of two bivectors

\begin{aligned}(\mathbf{e}_j \wedge \mathbf{e}_k) \cdot (\mathbf{e}_a \wedge \mathbf{e}_b) &=( (\mathbf{e}_j \wedge \mathbf{e}_k) \cdot \mathbf{e}_a ) \cdot \mathbf{e}_b =\delta_{ka}\delta_{jb} - \delta_{ja}\delta_{kb}\end{aligned} \hspace{\stretch{1}}(1.2)

We can prove 1.1 by expanding the LHS of 1.2 in coordinates

\begin{aligned}(\mathbf{e}_j \wedge \mathbf{e}_k) \cdot (\mathbf{e}_a \wedge \mathbf{e}_b)&= \sum_{ie} \left\langle{{\epsilon_{ijk} \mathbf{e}_j \mathbf{e}_k \epsilon_{eab} \mathbf{e}_a \mathbf{e}_b}}\right\rangle \\ &=\sum_{ie}\epsilon_{ijk} \epsilon_{eab}\left\langle{{(\mathbf{e}_i \mathbf{e}_i) \mathbf{e}_j \mathbf{e}_k (\mathbf{e}_e \mathbf{e}_e) \mathbf{e}_a \mathbf{e}_b}}\right\rangle \\ &=\sum_{ie}\epsilon_{ijk} \epsilon_{eab}\left\langle{{\mathbf{e}_i \mathbf{e}_e I^2}}\right\rangle \\ &=-\sum_{ie} \epsilon_{ijk} \epsilon_{eab} \delta_{ie} \\ &=-\sum_i\epsilon_{ijk} \epsilon_{iab}\qquad\square\end{aligned}

\item Question on raising and lowering arguments.

How equation (4.240) was arrived at is not clear. In (4.239) he writes

\begin{aligned}\int_0^{2\pi} \int_0^{\pi} d\theta d\phi(L_{-} Y_{lm})^\dagger L_{-} Y_{lm} \sin\theta\end{aligned}

Shouldn’t that Hermitian conjugation be just complex conjugation? If so, one would have

\begin{aligned}\int_0^{2\pi} \int_0^{\pi} d\theta d\phi L_{-}^{*} Y_{lm}^{*}L_{-} Y_{lm} \sin\theta\end{aligned}

How does he end up with L_{-} and Y_{lm}^{*} interchanged? What justifies this commutation?

A much clearer discussion of this can be found in The operators L_{\pm}, where Dirac notation is used for the normalization discussion.

\item Another question on raising and lowering arguments.

The reasoning leading to (4.238) isn’t clear to me. I fail to see how the L_{-} commutation with \mathbf{L}^2 implies this.



Problem 1.


Write down the free particle Schr\”{o}dinger equation for two dimensions in (i) Cartesian and (ii) polar coordinates. Obtain the corresponding wavefunction.

Cartesian case.

For the Cartesian coordinates case we have

\begin{aligned}H = -\frac{\hbar^2}{2m} (\partial_{xx} + \partial_{yy}) = i \hbar \partial_t\end{aligned} \hspace{\stretch{1}}(2.3)

Application of separation of variables with \Psi = XYT gives

\begin{aligned}-\frac{\hbar^2}{2m} \left( \frac{X''}{X} +\frac{Y''}{Y} \right) = i \hbar \frac{T'}{T} = E .\end{aligned} \hspace{\stretch{1}}(2.4)

Immediately, we have the time dependence

\begin{aligned}T \propto e^{-i E t/\hbar},\end{aligned} \hspace{\stretch{1}}(2.5)

with the PDE reduced to

\begin{aligned}\frac{X''}{X} +\frac{Y''}{Y} = - \frac{2m E}{\hbar^2}.\end{aligned} \hspace{\stretch{1}}(2.6)

Introducing separate independent constants

\begin{aligned}\frac{X''}{X} &= a^2 \\ \frac{Y''}{Y} &= b^2 \end{aligned} \hspace{\stretch{1}}(2.7)

provides the pre-normalized wave function and the constraints on the constants

\begin{aligned}\Psi &= C e^{ax}e^{by}e^{-iE t/\hbar} \\ a^2 + b^2 &= -\frac{2 m E}{\hbar^2}.\end{aligned} \hspace{\stretch{1}}(2.9)

Rectangular normalization.

We are now ready to apply normalization constraints. One possibility is a rectangular periodicity requirement.

\begin{aligned}e^{ax} &= e^{a(x + \lambda_x)} \\ e^{by} &= e^{b(y + \lambda_y)} ,\end{aligned} \hspace{\stretch{1}}(2.11)


\begin{aligned}a\lambda_x &= 2 \pi i m \\ b\lambda_y &= 2 \pi i n.\end{aligned} \hspace{\stretch{1}}(2.13)

This provides a more explicit form for the energy expression (note the unfortunate notational collision below: the m in the denominator is the particle mass, while the m in the numerator is the integer index)

\begin{aligned}E_{mn} &= \frac{1}{{2m}} 4 \pi^2 \hbar^2 \left( \frac{m^2}{{\lambda_x}^2}+\frac{n^2}{{\lambda_y}^2}\right).\end{aligned} \hspace{\stretch{1}}(2.15)

We can also add in the area normalization using

\begin{aligned}\left\langle{{\psi}} \vert {{\phi}}\right\rangle &= \int_{x=0}^{\lambda_x} dx\int_{y=0}^{\lambda_y} dy \psi^{*}(x,y) \phi(x,y).\end{aligned} \hspace{\stretch{1}}(2.16)

Our eigenfunctions are now completely specified

\begin{aligned}u_{mn}(x,y,t) &= \frac{1}{{\sqrt{\lambda_x \lambda_y}}}e^{2 \pi i m x/\lambda_x}e^{2 \pi i n y/\lambda_y}e^{-iE_{mn} t/\hbar}.\end{aligned} \hspace{\stretch{1}}(2.17)

The interesting thing about this solution is that we can make arbitrary linear combinations

\begin{aligned}f(x,y) = \sum_{mn} a_{mn} u_{mn}\end{aligned} \hspace{\stretch{1}}(2.18)

and then “solve” for a_{mn}, for an arbitrary f(x,y) by taking inner products

\begin{aligned}a_{mn} = \left\langle{{u_{mn}}} \vert {{f}}\right\rangle =\int_{x=0}^{\lambda_x} dx \int_{y=0}^{\lambda_y} dy f(x,y) u_{mn}^{*}(x,y).\end{aligned} \hspace{\stretch{1}}(2.19)

This gives the appearance that any function f(x,y) is a solution, but the equality of 2.18 only applies for functions in the span of this function vector space. The procedure works for arbitrary square integrable functions f(x,y), but the equality really means that the RHS will be the periodic extension of f(x,y).
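Writing the modes as u_{mn} = e^{2\pi i(m x/\lambda_x + n y/\lambda_y)}/\sqrt{\lambda_x \lambda_y}, the orthonormality underlying this coefficient extraction is easy to confirm numerically (the box dimensions and grid resolution below are arbitrary choices for this sketch):

```python
import numpy as np

Lx, Ly = 1.0, 2.0
x = np.linspace(0, Lx, 256, endpoint=False)
y = np.linspace(0, Ly, 256, endpoint=False)
X, Y = np.meshgrid(x, y, indexing="ij")
dA = (x[1] - x[0]) * (y[1] - y[0])

def u(m, n):
    # u_mn(x, y) = exp(2 pi i (m x / Lx + n y / Ly)) / sqrt(Lx Ly)
    return np.exp(2j * np.pi * (m * X / Lx + n * Y / Ly)) / np.sqrt(Lx * Ly)

# <u_mn | u_m'n'> = delta_mm' delta_nn'; the rectangle rule is essentially
# exact here because the integrands are periodic on the box.
ip_same = np.sum(np.conj(u(1, 2)) * u(1, 2)) * dA
ip_diff = np.sum(np.conj(u(1, 2)) * u(2, 1)) * dA
print(abs(ip_same - 1) < 1e-9, abs(ip_diff) < 1e-9)   # -> True True
```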

Infinite space normalization.

An alternate normalization is possible by using the Fourier transform normalization, in which we substitute

\begin{aligned}\frac{2 \pi m }{\lambda_x} &= k_x \\ \frac{2 \pi n }{\lambda_y} &= k_y \end{aligned} \hspace{\stretch{1}}(2.20)

Our inner product is now

\begin{aligned}\left\langle{{\psi}} \vert {{\phi}}\right\rangle &= \int_{-\infty}^{\infty} dx\int_{-\infty}^{\infty} dy \psi^{*}(x,y) \phi(x,y).\end{aligned} \hspace{\stretch{1}}(2.22)

And the corresponding normalized wavefunction and associated energy constant E are

\begin{aligned}u_{\mathbf{k}}(x,y,t) &= \frac{1}{{2\pi}}e^{i k_x x}e^{i k_y y}e^{-iE t/\hbar} = \frac{1}{{2\pi}}e^{i \mathbf{k} \cdot \mathbf{x}}e^{-iE t/\hbar} \\ E &= \frac{\hbar^2 \mathbf{k}^2 }{2m}\end{aligned} \hspace{\stretch{1}}(2.23)

Now via this Fourier inner product we are able to construct a solution from any square integrable function. Again, this will not be
an exact equality since the Fourier transform has the effect of averaging across discontinuities.

Polar case.

In polar coordinates our gradient is

\begin{aligned}\boldsymbol{\nabla} &= \hat{\mathbf{r}} \partial_r + \frac{\hat{\boldsymbol{\theta}}}{r} \partial_\theta.\end{aligned} \hspace{\stretch{1}}(2.25)


\begin{aligned}\hat{\mathbf{r}} &= \mathbf{e}_1 e^{\mathbf{e}_1 \mathbf{e}_2 \theta} \\ \hat{\boldsymbol{\theta}} &= \mathbf{e}_2 e^{\mathbf{e}_1 \mathbf{e}_2 \theta} .\end{aligned} \hspace{\stretch{1}}(2.26)

Squaring the gradient for the Laplacian we’ll need the partials, which are

\begin{aligned}\partial_r \hat{\mathbf{r}} &= 0 \\ \partial_r \hat{\boldsymbol{\theta}} &= 0 \\ \partial_\theta \hat{\mathbf{r}} &= \hat{\boldsymbol{\theta}} \\ \partial_\theta \hat{\boldsymbol{\theta}} &= -\hat{\mathbf{r}}.\end{aligned}

The Laplacian is therefore

\begin{aligned}\boldsymbol{\nabla}^2 &= (\hat{\mathbf{r}} \partial_r + \frac{\hat{\boldsymbol{\theta}}}{r} \partial_\theta) \cdot(\hat{\mathbf{r}} \partial_r + \frac{\hat{\boldsymbol{\theta}}}{r} \partial_\theta) \\ &= \partial_{rr} + \frac{\hat{\boldsymbol{\theta}}}{r} \cdot \partial_\theta \left( \hat{\mathbf{r}} \partial_r \right) + \frac{\hat{\boldsymbol{\theta}}}{r} \cdot \partial_\theta \left( \frac{\hat{\boldsymbol{\theta}}}{r} \partial_\theta \right) \\ &= \partial_{rr} + \frac{\hat{\boldsymbol{\theta}}}{r} \cdot (\partial_\theta \hat{\mathbf{r}}) \partial_r + \frac{\hat{\boldsymbol{\theta}}}{r} \cdot \frac{\hat{\boldsymbol{\theta}}}{r} \partial_{\theta\theta} + \frac{\hat{\boldsymbol{\theta}}}{r} \cdot (\partial_\theta \hat{\boldsymbol{\theta}}) \frac{1}{{r}} \partial_\theta .\end{aligned}

Evaluating the derivatives we have

\begin{aligned}\boldsymbol{\nabla}^2 = \partial_{rr} + \frac{1}{{r}} \partial_r + \frac{1}{r^2} \partial_{\theta\theta},\end{aligned} \hspace{\stretch{1}}(2.28)

and are now prepared to move on to the solution of the Hamiltonian H = -(\hbar^2/2m) \boldsymbol{\nabla}^2. With separation of variables again using \Psi = R(r) \Theta(\theta) T(t) we have

\begin{aligned}-\frac{\hbar^2}{2m} \left( \frac{R''}{R} + \frac{R'}{rR} + \frac{1}{{r^2}} \frac{\Theta''}{\Theta} \right) = i \hbar \frac{T'}{T} = E.\end{aligned} \hspace{\stretch{1}}(2.29)

Rearranging to separate the \Theta term we have

\begin{aligned}\frac{r^2 R''}{R} + \frac{r R'}{R} + \frac{2 m E}{\hbar^2} r^2 = -\frac{\Theta''}{\Theta} = \lambda^2.\end{aligned} \hspace{\stretch{1}}(2.30)

The angular solutions are given by

\begin{aligned}\Theta = \frac{1}{{\sqrt{2\pi}}} e^{i \lambda \theta}\end{aligned} \hspace{\stretch{1}}(2.31)

where single valuedness requires integer \lambda, and the normalization is given by

\begin{aligned}\left\langle{{\psi}} \vert {{\phi}}\right\rangle &= \int_{0}^{2 \pi} d\theta \psi^{*}(\theta) \phi(\theta).\end{aligned} \hspace{\stretch{1}}(2.32)

and the radial part by the solution of the ODE

\begin{aligned}r^2 R'' + r R' + \left( \frac{2 m E}{\hbar^2} r^2 - \lambda^2 \right) R = 0\end{aligned} \hspace{\stretch{1}}(2.33)
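This is Bessel's equation; with k^2 = 2 m E/\hbar^2, the solutions regular at the origin are R \propto J_\lambda(k r). A quick numerical residual check (the values of k and \lambda are arbitrary picks for this sketch):

```python
import numpy as np
from scipy.special import jv

k, lam = 2.0, 3.0
r = np.linspace(0.1, 10.0, 4000)
R = jv(lam, k * r)                      # Bessel function J_lambda(k r)
dR = np.gradient(R, r)                  # numerical first derivative
d2R = np.gradient(dR, r)                # numerical second derivative

# r^2 R'' + r R' + (k^2 r^2 - lam^2) R should vanish (away from the grid edges,
# where the one-sided differences are less accurate).
residual = r**2 * d2R + r * dR + (k**2 * r**2 - lam**2) * R
ok = np.max(np.abs(residual[5:-5])) < 0.05
print(ok)                               # -> True
```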

Problem 2.


Use the orthogonality property of P_l(\cos\theta)

\begin{aligned}\int_{-1}^1 dx P_l(x) P_{l'}(x) = \frac{2}{2l+1} \delta_{l l'},\end{aligned} \hspace{\stretch{1}}(2.34)

confirm that at least the first two terms of (4.171)

\begin{aligned}e^{i k r \cos\theta} = \sum_{l=0}^\infty (2l + 1) i^l j_l(kr) P_l(\cos\theta)\end{aligned} \hspace{\stretch{1}}(2.35)

are correct.


Taking the inner product using the integral of 2.34 we have

\begin{aligned}\int_{-1}^1 dx e^{i k r x} P_{l'}(x) = 2 i^{l'} j_{l'}(kr) \end{aligned} \hspace{\stretch{1}}(2.36)

To confirm the first two terms we need

\begin{aligned}P_0(x) &= 1 \\ P_1(x) &= x \\ j_0(\rho) &= \frac{\sin\rho}{\rho} \\ j_1(\rho) &= \frac{\sin\rho}{\rho^2} - \frac{\cos\rho}{\rho}.\end{aligned} \hspace{\stretch{1}}(2.37)

On the LHS for l'=0 we have

\begin{aligned}\int_{-1}^1 dx e^{i k r x} = 2 \frac{\sin{kr}}{kr}\end{aligned} \hspace{\stretch{1}}(2.41)

On the LHS for l'=1 note that

\begin{aligned}\int dx x e^{i k r x} &= \int dx x \frac{d}{dx} \frac{e^{i k r x}}{ikr} \\ &= x \frac{e^{i k r x}}{ikr} - \frac{e^{i k r x}}{(ikr)^2}.\end{aligned}

So, integration in [-1,1] gives us

\begin{aligned}\int_{-1}^1 dx x e^{i k r x} =  -2i \frac{\cos{kr}}{kr} + 2i \frac{1}{{(kr)^2}} \sin{kr}.\end{aligned} \hspace{\stretch{1}}(2.42)

Now compare to the RHS for l'=0, which is

\begin{aligned}2 j_0(kr) = 2 \frac{\sin{kr}}{kr},\end{aligned} \hspace{\stretch{1}}(2.43)

which matches 2.41. For l'=1 we have

\begin{aligned}2 i j_1(kr) = 2i \frac{1}{{kr}} \left( \frac{\sin{kr}}{kr} - \cos{kr} \right),\end{aligned} \hspace{\stretch{1}}(2.44)

which in turn matches 2.42, completing the exercise.
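Both matches can also be confirmed numerically, a sketch using scipy's Legendre and spherical Bessel routines (the kr value is an arbitrary test point):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import spherical_jn, eval_legendre

kr = 1.7                                   # arbitrary test value
checks = []
for l in (0, 1):
    # LHS: int_{-1}^{1} e^{i k r x} P_l(x) dx, real and imaginary parts separately
    re, _ = quad(lambda x: np.cos(kr * x) * eval_legendre(l, x), -1, 1)
    im, _ = quad(lambda x: np.sin(kr * x) * eval_legendre(l, x), -1, 1)
    lhs = re + 1j * im
    rhs = 2 * (1j ** l) * spherical_jn(l, kr)  # RHS: 2 i^l j_l(kr)
    checks.append(abs(lhs - rhs))
print(max(checks) < 1e-10)                 # -> True
```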

Problem 3.


Obtain the commutation relations \left[{L_i},{L_j}\right] by calculating the vector \mathbf{L} \times \mathbf{L} using the definition \mathbf{L} = \mathbf{r} \times \mathbf{p} directly instead of introducing a differential operator.


Expressing the product \mathbf{L} \times \mathbf{L} in determinant form sheds some light on this question. That is

\begin{aligned}\begin{vmatrix} \mathbf{e}_1 & \mathbf{e}_2 & \mathbf{e}_3 \\  L_1 & L_2 & L_3 \\  L_1 & L_2 & L_3\end{vmatrix}&= \mathbf{e}_1 \left[{L_2},{L_3}\right] +\mathbf{e}_2 \left[{L_3},{L_1}\right] +\mathbf{e}_3 \left[{L_1},{L_2}\right]= \frac{1}{{2}} \mathbf{e}_i \epsilon_{ijk} \left[{L_j},{L_k}\right]\end{aligned} \hspace{\stretch{1}}(2.45)

We see that evaluating this cross product in turn requires evaluation of the set of commutators. We can do that with the canonical commutator relationships directly using L_i = \epsilon_{ijk} r_j p_k like so

\begin{aligned}\left[{L_i},{L_j}\right]&=\epsilon_{imn} r_m p_n \epsilon_{jab} r_a p_b- \epsilon_{jab} r_a p_b \epsilon_{imn} r_m p_n \\ &=\epsilon_{imn} \epsilon_{jab} r_m (p_n r_a) p_b- \epsilon_{jab} \epsilon_{imn} r_a (p_b r_m) p_n \\ &=\epsilon_{imn} \epsilon_{jab} r_m (r_a p_n -i \hbar \delta_{an}) p_b- \epsilon_{jab} \epsilon_{imn} r_a (r_m p_b - i \hbar \delta_{mb}) p_n \\ &=\epsilon_{imn} \epsilon_{jab} (r_m r_a p_n p_b - r_a r_m p_b p_n )- i \hbar ( \epsilon_{imn} \epsilon_{jnb} r_m p_b - \epsilon_{jam} \epsilon_{imn} r_a p_n ).\end{aligned}

The first two terms cancel, and we can employ (4.179) to eliminate the antisymmetric tensors from the last two terms

\begin{aligned}\left[{L_i},{L_j}\right]&=i \hbar ( \epsilon_{nim} \epsilon_{njb} r_m p_b - \epsilon_{mja} \epsilon_{min} r_a p_n ) \\ &=i \hbar ( (\delta_{ij} \delta_{mb} -\delta_{ib} \delta_{mj}) r_m p_b - (\delta_{ji} \delta_{an} -\delta_{jn} \delta_{ai}) r_a p_n ) \\ &=i \hbar (\delta_{ij} \delta_{mb} r_m p_b - \delta_{ji} \delta_{an} r_a p_n - \delta_{ib} \delta_{mj} r_m p_b + \delta_{jn} \delta_{ai} r_a p_n ) \\ &=i \hbar (\delta_{ij} r_m p_m- \delta_{ji} r_a p_a- r_j p_i+ r_i p_j ) \\ \end{aligned}

The \delta_{ij} terms cancel, and for k \ne i,j what remains is i\hbar (\mathbf{r} \times \mathbf{p})_k, so we can write

\begin{aligned}\mathbf{L} \times \mathbf{L} &= \frac{1}{{2}} i\hbar \mathbf{e}_k \epsilon_{kij} ( r_i p_j - r_j p_i ) = i\hbar \mathbf{e}_k \epsilon_{kij} r_i p_j = i\hbar \mathbf{e}_k L_k = i\hbar \mathbf{L}.\end{aligned} \hspace{\stretch{1}}(2.46)

In [2], the commutator relationships are summarized this way, instead of using the \mathbf{L} \times \mathbf{L} = i \hbar \mathbf{L} form of (4.224)

\begin{aligned}\left[{L_i},{L_j}\right] &= i \hbar \epsilon_{ijk} L_k\end{aligned} \hspace{\stretch{1}}(2.47)

as here in Desai. Both say the same thing.
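The commutation relations are easy to spot-check with the l = 1 matrix representation (here \hbar = 1 and the basis ordering |1⟩, |0⟩, |−1⟩ are my own choices for the sketch):

```python
import numpy as np

s2 = np.sqrt(2.0)
Lp = np.array([[0, s2, 0],
               [0, 0, s2],
               [0, 0, 0]])              # L_+ for l = 1, basis |1>, |0>, |-1>
Lm = Lp.T                               # L_- = (L_+)^dagger (real here)
Lx = (Lp + Lm) / 2
Ly = (Lp - Lm) / 2j
Lz = np.diag([1.0, 0.0, -1.0])

def comm(A, B):
    return A @ B - B @ A

# [L_i, L_j] = i eps_ijk L_k for the three cyclic pairs
ok = (np.allclose(comm(Lx, Ly), 1j * Lz)
      and np.allclose(comm(Ly, Lz), 1j * Lx)
      and np.allclose(comm(Lz, Lx), 1j * Ly))
print(ok)                               # -> True
```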

Problem 4.




Problem 5.


A free particle is moving along a path of radius R. Express the Hamiltonian in terms of the derivatives involving the polar angle of the particle and write down the Schr\”{o}dinger equation. Determine the wavefunction and the energy eigenvalues of the particle.


In classical mechanics our Lagrangian for this system is

\begin{aligned}\mathcal{L} = \frac{1}{{2}} m R^2 \dot{\theta}^2,\end{aligned} \hspace{\stretch{1}}(2.48)

with the canonical momentum

\begin{aligned}p_\theta = \frac{\partial {\mathcal{L}}}{\partial {\dot{\theta}}} = m R^2 \dot{\theta}.\end{aligned} \hspace{\stretch{1}}(2.49)

Thus the classical Hamiltonian is

\begin{aligned}H = \frac{1}{{2m R^2}} {p_\theta}^2.\end{aligned} \hspace{\stretch{1}}(2.50)

By analogy the QM Hamiltonian operator will therefore be

\begin{aligned}H = -\frac{\hbar^2}{2m R^2} \partial_{\theta\theta}.\end{aligned} \hspace{\stretch{1}}(2.51)

For \Psi = \Theta(\theta) T(t), separation of variables gives us

\begin{aligned}-\frac{\hbar^2}{2m R^2} \frac{\Theta''}{\Theta} = i \hbar \frac{T'}{T} = E,\end{aligned} \hspace{\stretch{1}}(2.52)

from which we have

\begin{aligned}T &\propto e^{-i E t/\hbar} \\ \Theta &\propto e^{ \pm i \sqrt{2m E} R \theta/\hbar }.\end{aligned} \hspace{\stretch{1}}(2.53)

Requiring single valued \Theta, equal at any multiples of 2\pi, we have

\begin{aligned}e^{ \pm i \sqrt{2m E} R (\theta + 2\pi)/\hbar } = e^{ \pm i \sqrt{2m E} R \theta/\hbar },\end{aligned}


\begin{aligned}\pm \sqrt{2m E} \frac{R}{\hbar} 2\pi = 2 \pi n,\end{aligned}

Suffixing the energy values with this index we have

\begin{aligned}E_n = \frac{n^2 \hbar^2}{2 m R^2}.\end{aligned} \hspace{\stretch{1}}(2.55)

Allowing both positive and negative integer values for n we have

\begin{aligned}\Psi = \frac{1}{{\sqrt{2\pi}}} e^{i n \theta} e^{-i E_n t/\hbar},\end{aligned} \hspace{\stretch{1}}(2.56)

where the normalization was a result of the use of a [0,2\pi] inner product over the angles

\begin{aligned}\left\langle{{\psi}} \vert {{\phi}}\right\rangle \equiv \int_0^{2\pi} \psi^{*}(\theta) \phi(\theta) d\theta.\end{aligned} \hspace{\stretch{1}}(2.57)
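Equivalently, a periodic finite-difference discretization of H = -(\hbar^2/2 m R^2) \partial_{\theta\theta} reproduces the doubly degenerate n^2 spectrum (the units \hbar = m = R = 1 and the grid size are arbitrary choices for this sketch):

```python
import numpy as np

M = 400
h = 2 * np.pi / M
D2 = (np.diag(-2.0 * np.ones(M))
      + np.diag(np.ones(M - 1), 1)
      + np.diag(np.ones(M - 1), -1))
D2[0, -1] = D2[-1, 0] = 1.0             # periodic boundary: theta = 0 and 2 pi identified
H = -0.5 * D2 / h**2                    # hbar = m = R = 1

E = np.sort(np.linalg.eigvalsh(H))
print(np.round(E[:5], 3))               # ~ [0, 0.5, 0.5, 2, 2]: E_n = n^2/2, doubly degenerate
```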

Problem 6.


Determine \left[{L_i},{r}\right] and \left[{L_i},{\mathbf{r}}\right].


Since L_i contain only \theta and \phi partials, \left[{L_i},{r}\right] = 0. For the position vector, however, we have an angular dependence, and are left to evaluate \left[{L_i},{\mathbf{r}}\right] = r \left[{L_i},{\hat{\mathbf{r}}}\right]. We’ll need the partials for \hat{\mathbf{r}}. We have

\begin{aligned}\hat{\mathbf{r}} &= \mathbf{e}_3 e^{I \hat{\boldsymbol{\phi}} \theta} \\ \hat{\boldsymbol{\phi}} &= \mathbf{e}_2 e^{\mathbf{e}_1 \mathbf{e}_2 \phi} \\ I &= \mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_3\end{aligned} \hspace{\stretch{1}}(2.58)

Evaluating the partials we have

\begin{aligned}\partial_\theta \hat{\mathbf{r}} = \hat{\mathbf{r}} I \hat{\boldsymbol{\phi}}\end{aligned}


Writing the unit vectors in terms of a rotor R,

\begin{aligned}\hat{\boldsymbol{\theta}} &= \tilde{R} \mathbf{e}_1 R \\ \hat{\boldsymbol{\phi}} &= \tilde{R} \mathbf{e}_2 R \\ \hat{\mathbf{r}} &= \tilde{R} \mathbf{e}_3 R\end{aligned} \hspace{\stretch{1}}(2.61)

where \tilde{R} R = 1, and \hat{\boldsymbol{\theta}} \hat{\boldsymbol{\phi}} \hat{\mathbf{r}} = \mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_3, we have

\begin{aligned}\partial_\theta \hat{\mathbf{r}} &= \tilde{R} \mathbf{e}_3 \mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_3 \mathbf{e}_2 R = \tilde{R} \mathbf{e}_1 R = \hat{\boldsymbol{\theta}}\end{aligned} \hspace{\stretch{1}}(2.64)

For the \phi partial we have

\begin{aligned}\partial_\phi \hat{\mathbf{r}}&= \mathbf{e}_3 \sin\theta I \hat{\boldsymbol{\phi}} \mathbf{e}_1 \mathbf{e}_2 \\ &= \sin\theta \hat{\boldsymbol{\phi}}\end{aligned}

We are now prepared to evaluate the commutators. Starting with the easiest we have

\begin{aligned}\left[{L_z},{\hat{\mathbf{r}}}\right] \Psi&=-i \hbar (\partial_\phi \hat{\mathbf{r}} \Psi - \hat{\mathbf{r}} \partial_\phi \Psi ) \\ &=-i \hbar (\partial_\phi \hat{\mathbf{r}}) \Psi  \\ \end{aligned}

So we have

\begin{aligned}\left[{L_z},{\hat{\mathbf{r}}}\right]&=-i \hbar \sin\theta \hat{\boldsymbol{\phi}}\end{aligned} \hspace{\stretch{1}}(2.65)

Observe that, by virtue of the product rule, only the action of the partials on \hat{\mathbf{r}} itself contributes, and all the partials applied to \Psi cancel out due to the commutator differences. That simplifies the remaining commutator evaluations. For reference the polar form of L_x, and L_y are

\begin{aligned}L_x &= -i \hbar (-S_\phi \partial_\theta - C_\phi \cot\theta \partial_\phi) \\ L_y &= -i \hbar (C_\phi \partial_\theta - S_\phi \cot\theta \partial_\phi),\end{aligned} \hspace{\stretch{1}}(2.66)

where the sines and cosines are written with S, and C respectively for short.

We therefore have

\begin{aligned}\left[{L_x},{\hat{\mathbf{r}}}\right]&= -i \hbar (-S_\phi (\partial_\theta \hat{\mathbf{r}}) - C_\phi \cot\theta (\partial_\phi \hat{\mathbf{r}}) ) \\ &= -i \hbar (-S_\phi \hat{\boldsymbol{\theta}} - C_\phi \cot\theta S_\theta \hat{\boldsymbol{\phi}} ) \\ &= -i \hbar (-S_\phi \hat{\boldsymbol{\theta}} - C_\phi C_\theta \hat{\boldsymbol{\phi}} ) \\ \end{aligned}


\begin{aligned}\left[{L_y},{\hat{\mathbf{r}}}\right]&= -i \hbar (C_\phi (\partial_\theta \hat{\mathbf{r}}) - S_\phi \cot\theta (\partial_\phi \hat{\mathbf{r}})) \\ &= -i \hbar (C_\phi \hat{\boldsymbol{\theta}} - S_\phi C_\theta \hat{\boldsymbol{\phi}} ).\end{aligned}

Adding back in the factor of r, and summarizing we have

\begin{aligned}\left[{L_i},{r}\right] &= 0 \\ \left[{L_x},{\mathbf{r}}\right] &= -i \hbar r (-\sin\phi \hat{\boldsymbol{\theta}} - \cos\phi \cos\theta \hat{\boldsymbol{\phi}} ) \\ \left[{L_y},{\mathbf{r}}\right] &= -i \hbar r (\cos\phi \hat{\boldsymbol{\theta}} - \sin\phi \cos\theta \hat{\boldsymbol{\phi}} ) \\ \left[{L_z},{\mathbf{r}}\right] &= -i \hbar r \sin\theta \hat{\boldsymbol{\phi}}\end{aligned} \hspace{\stretch{1}}(2.68)

Problem 7.


Show that

\begin{aligned}e^{-i\pi L_x /\hbar } {\lvert {l,m} \rangle} = {\lvert {l,m-1} \rangle}\end{aligned} \hspace{\stretch{1}}(2.72)




[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.

[2] R. Liboff. Introductory quantum mechanics. 2003.


Infinite square well wavefunction.

Posted by peeterjoot on May 31, 2010



Work problem 4.1 from [1], calculation of the eigensolution for an infinite square well, with boundaries [-a/2, a/2]. It actually seems a bit tidier to generalize this slightly to boundaries [a,b], which also implicitly solves the original problem. This is surely a problem that is done in 700 other QM texts, but I liked the way I did it this time so am writing it down.


Our equation to solve is i \hbar \Psi_t = -(\hbar^2/2m) \Psi_{xx}. Separation of variables \Psi = T \phi gives us

\begin{aligned}T &\propto e^{-i E t/\hbar } \\ \phi'' &= -\frac{2 m E }{\hbar^2} \phi\end{aligned} \hspace{\stretch{1}}(2.1)

With k^2 = 2 m E/\hbar^2, we have

\begin{aligned}\phi = A e^{i k x } + B e^{-i k x},\end{aligned} \hspace{\stretch{1}}(2.3)

and the usual \phi(a) = \phi(b) = 0 boundary conditions give us

\begin{aligned}0 = \begin{bmatrix}e^{i k a } & e^{-i k a} \\ e^{i k b } & e^{-i k b}\end{bmatrix}\begin{bmatrix}A \\ B\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(2.4)

We must have a zero determinant, which gives us the constraints on k immediately

\begin{aligned}0 &= e^{i k (a - b)} - e^{i k (b-a)} \\ &= 2 i \sin( k (a - b) ).\end{aligned}

This gives the constraint on k in terms of integers n, and the corresponding energies

\begin{aligned}k &= \frac{n \pi}{b - a} \\ E &= \frac{\hbar^2 n^2 \pi^2 }{2 m (b-a)^2}.\end{aligned} \hspace{\stretch{1}}(2.5)

One of the constants A,B can be eliminated directly by picking any one of the two zeros from 2.4

\begin{aligned}&A e ^{i k a } + B e^{-i k a} = 0 \\ &\implies \\ &B = -A e ^{2 i k a } \end{aligned}

So we have

\begin{aligned}\phi = A \left( e^{i k x } - e^{ ik (2a - x) } \right).\end{aligned} \hspace{\stretch{1}}(2.7)


Factoring out a common e^{i k a} this is

\begin{aligned}\phi = 2 A i e^{i k a} \sin( k (x-a )) \end{aligned} \hspace{\stretch{1}}(2.8)

Because probability densities, currents and the expectations of any operators will always have paired \phi and \phi^{*} factors, any constant phase factors like i e^{i k a} above can be dropped, or absorbed into the constant A, and we can write

\begin{aligned}\phi = 2 A \sin( k (x-a )) \end{aligned} \hspace{\stretch{1}}(2.9)

The only thing left is to fix A by integrating {\left\lvert{\phi}\right\rvert}^2, for which we have

\begin{aligned}1 &= \int_a^b \phi \phi^{*} dx \\ &= A^2 \int_a^b dx \left( e^{i k x } - e^{ ik (2a - x) } \right) \left( e^{-i k x } - e^{ -ik (2a - x) } \right) \\ &= A^2 \int_a^b dx \left( 2 - e^{ik(2a - 2x)} - e^{ik(-2a + 2x)} \right) \\ &= 2 A^2 \int_a^b dx \left( 1 - \cos (2 k (a - x)) \right)\end{aligned}

This last trig term integrates to zero since 2 k (b - a) = 2 n \pi, and we are left with

\begin{aligned}A = \frac{1}{{ \sqrt{2 (b-a)}}},\end{aligned} \hspace{\stretch{1}}(2.10)

which essentially completes the problem. A substitution back into 2.8 allows for a final tidy up

\begin{aligned}\phi = \sqrt{\frac{2}{b-a}} \sin( k (x-a )).\end{aligned} \hspace{\stretch{1}}(2.11)
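A final numerical check of the normalization and orthogonality of these eigenfunctions over [a, b] (the interval is chosen arbitrarily for this sketch):

```python
import numpy as np

a, b = -0.5, 0.5                        # any interval works
x = np.linspace(a, b, 20001)
dx = x[1] - x[0]

def phi(n):
    k = n * np.pi / (b - a)
    return np.sqrt(2.0 / (b - a)) * np.sin(k * (x - a))

# Trapezoid-equivalent sum: the endpoints contribute nothing since phi vanishes there.
ips = [(n, m, np.sum(phi(n) * phi(m)) * dx) for n in (1, 2, 3) for m in (1, 2, 3)]
ok = all(abs(ip - (1.0 if n == m else 0.0)) < 1e-6 for n, m, ip in ips)
print(ok)                               # -> True
```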


[1] R. Liboff. Introductory quantum mechanics. Cambridge: Addison-Wesley Press, Inc, 2003.
