# Peeter Joot's Blog.

# Posts Tagged ‘Heisenberg picture’

## PHY456H1F: Quantum Mechanics II. Lecture 6 (Taught by Prof J.E. Sipe). Interaction picture.

Posted by peeterjoot on September 27, 2011

[Click here for a PDF of this post with nicer formatting (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

# Disclaimer.

Peeter’s lecture notes from class. May not be entirely coherent.

# Interaction picture.

## Recap.

Recall our table comparing the Schr\"{o}dinger and Heisenberg pictures

\begin{aligned}\text{Schr\"{o}dinger picture} &\qquad \text{Heisenberg picture} \\ i \hbar \frac{d}{dt} {\lvert {\psi_s(t)} \rangle} = H {\lvert {\psi_s(t)} \rangle} &\qquad i \hbar \frac{d}{dt} O_H(t) = \left[{O_H},{H}\right] \\ {\langle {\psi_s(t)} \rvert} O_S {\lvert {\psi_s(t)} \rangle} &= {\langle {\psi_H} \rvert} O_H {\lvert {\psi_H} \rangle} \\ {\lvert {\psi_s(0)} \rangle} &= {\lvert {\psi_H} \rangle} \\ O_S &= O_H(0)\end{aligned}

## A motivating example.

While fundamental Hamiltonians are independent of time, in a number of common cases, we can form approximate Hamiltonians that are time dependent. One such example is that of Coulomb excitations of an atom, as covered in section 18.3 of the text [1], and shown in figure (\ref{fig:qmTwoL6fig1}).

\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.4\textheight]{qmTwoL6fig1}
\caption{Coulomb interaction of a heavy nucleus with an atom.}
\end{figure}

We consider the interaction of a nucleus with a neutral atom, where the nucleus is heavy enough that it can be treated classically. From the atom's point of view, the effects of the heavy nucleus barreling by can be described using a time dependent Hamiltonian. For the atom, that interaction Hamiltonian is

\begin{aligned}H' = \sum_i \frac{ Z e q_i }{{\left\lvert{\mathbf{r}_N(t) - \mathbf{R}_i}\right\rvert}}.\end{aligned} \hspace{\stretch{1}}(2.1)

Here $\mathbf{r}_N$ is the position vector of the heavy nucleus, and $\mathbf{R}_i$ is the position of each charge within the atom, where $i$ ranges over all the internal charges, positive and negative, within the atom.

Placing the origin close to the atom, we can write this interaction Hamiltonian as

\begin{aligned}H'(t) = \not{{\sum_i \frac{Z e q_i}{{\left\lvert{\mathbf{r}_N(t)}\right\rvert}}}}+ \sum_i Z e q_i \mathbf{R}_i \cdot {\left.{{\left(\frac{\partial {}}{\partial {\mathbf{r}}} \frac{1}{{{\left\lvert{ \mathbf{r}_N(t) - \mathbf{r}}\right\rvert}}}\right)}}\right\vert}_{{\mathbf{r} = 0}}\end{aligned} \hspace{\stretch{1}}(2.2)

The first term vanishes because the total charge in our neutral atom is zero. This leaves us with

\begin{aligned}\begin{aligned}H'(t) &= -\sum_i q_i \mathbf{R}_i \cdot {\left.{{\left(-\frac{\partial {}}{\partial {\mathbf{r}}} \frac{ Z e}{{\left\lvert{ \mathbf{r}_N(t) - \mathbf{r}}\right\rvert}}\right)}}\right\vert}_{{\mathbf{r} = 0}} \\ &= - \sum_i q_i \mathbf{R}_i \cdot \mathbf{E}(t),\end{aligned}\end{aligned} \hspace{\stretch{1}}(2.3)

where $\mathbf{E}(t)$ is the electric field at the origin due to the nucleus.

Introducing a dipole moment operator for the atom

\begin{aligned}\boldsymbol{\mu} = \sum_i q_i \mathbf{R}_i,\end{aligned} \hspace{\stretch{1}}(2.4)

the interaction takes the form

\begin{aligned}H'(t) = -\boldsymbol{\mu} \cdot \mathbf{E}(t).\end{aligned} \hspace{\stretch{1}}(2.5)

Here we have a quantum mechanical operator and a classical field taken together. This sort of dipole interaction also occurs when we treat an atom placed into an electromagnetic field, treated classically, as depicted in figure (\ref{fig:qmTwoL6fig2}).

\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.4\textheight]{qmTwoL6fig2}
\caption{Atom in a field.}
\end{figure}

In the figure, we can use the dipole interaction, provided $\lambda \gg a$, where $a$ is the “width” of the atom.

Because it is great for examples, we will see this dipole interaction a lot.

## The interaction picture.

Having talked about both the Schr\"{o}dinger and Heisenberg pictures, we can now move on to describe a hybrid, one where our Hamiltonian has been split into static and time dependent parts

\begin{aligned}H(t) = H_0 + H'(t)\end{aligned} \hspace{\stretch{1}}(2.6)

We will formulate an approach for dealing with problems of this sort called the interaction picture.

This is also covered in section 3.3 of the text, albeit in a much harder to understand fashion (the text tries not to pull the result from a magic hat, but the steps to the end result are messy). It would probably have been nicer to see it this way instead.

In the Schr\"{o}dinger picture our dynamics have the form

\begin{aligned}i \hbar \frac{d}{dt} {\lvert {\psi_s(t)} \rangle} = H {\lvert {\psi_s(t)} \rangle}\end{aligned} \hspace{\stretch{1}}(2.7)

How do we get to the Heisenberg picture? We look for a solution of the form

\begin{aligned}{\lvert {\psi_s(t)} \rangle} = U(t, t_0) {\lvert {\psi_s(t_0)} \rangle}.\end{aligned} \hspace{\stretch{1}}(2.8)

We want to find the operator that evolves the state at some initial time $t_0$ into the arbitrary later state found at time $t$. Plugging into 2.7 we have

\begin{aligned}i \hbar \frac{d{{}}}{dt} U(t, t_0) {\lvert {\psi_s(t_0)} \rangle}=H(t) U(t, t_0) {\lvert {\psi_s(t_0)} \rangle}.\end{aligned} \hspace{\stretch{1}}(2.9)

This has to hold for all ${\lvert {\psi_s(t_0)} \rangle}$, and we can equivalently seek a solution of the operator equation

\begin{aligned}i \hbar \frac{d{{}}}{dt} U(t, t_0) = H(t) U(t, t_0),\end{aligned} \hspace{\stretch{1}}(2.10)

where

\begin{aligned}U(t_0, t_0) = I,\end{aligned} \hspace{\stretch{1}}(2.11)

the identity for the Hilbert space.

Suppose that $H$ were independent of time. Then we would find

\begin{aligned}U(t, t_0) = e^{-i H(t - t_0)/\hbar}.\end{aligned} \hspace{\stretch{1}}(2.12)

If $H(t)$ depends on time, could we guess that

\begin{aligned}U(t, t_0) = e^{-\frac{i}{\hbar} \int_{t_0}^t H(\tau) d\tau}\end{aligned} \hspace{\stretch{1}}(2.13)

holds? No. This may be true when $H(t)$ is a number, but when it is an operator, the Hamiltonian does not necessarily commute with itself at different times

\begin{aligned}\left[{H(t')},{H(t'')}\right] \ne 0.\end{aligned} \hspace{\stretch{1}}(2.14)

So this is wrong in general. As an aside, for numbers, 2.13 can be verified easily. We have

\begin{aligned}i \hbar \left( e^{-\frac{i}{\hbar} \int_{t_0}^t H(\tau) d\tau} \right)'&=i \hbar \left( -\frac{i}{\hbar} \right) \left( \int_{t_0}^t H(\tau) d\tau \right)'e^{-\frac{i}{\hbar} \int_{t_0}^t H(\tau) d\tau } \\ &=\left( H(t) \frac{dt}{dt} - H(t_0) \frac{dt_0}{dt} \right)e^{-\frac{i}{\hbar} \int_{t_0}^t H(\tau) d\tau} \\ &= H(t) U(t, t_0)\end{aligned}
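This failure is easy to see numerically. The following Python sketch (my own; the particular 2x2 Hamiltonians are arbitrary choices, not from the lecture, with $\hbar = 1$) compares a brute force time ordered product of small steps against the naive exponential of 2.13, once for a Hamiltonian that does not commute with itself at different times and once for one that does:

```python
import numpy as np

sz = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)  # sigma_z
sx = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)   # sigma_x

def hexp(Hm, s):
    # e^{-1j * s * Hm} for Hermitian Hm, via eigendecomposition (hbar = 1)
    w, V = np.linalg.eigh(Hm)
    return (V * np.exp(-1j * s * w)) @ V.conj().T

def propagate(H, T, N=4000):
    # time ordered product of many small midpoint steps: the true U(T, 0)
    dt = T / N
    U = np.eye(2, dtype=complex)
    for k in range(N):
        U = hexp(H((k + 0.5) * dt), dt) @ U
    return U

T = 1.0

# Non-commuting case: [H(t'), H(t'')] != 0 for H(t) = sigma_z + t sigma_x
H1 = lambda t: sz + t * sx
U1 = propagate(H1, T)
U1_naive = hexp(sz * T + sx * T**2 / 2, 1.0)   # exp of the integral of H
err_noncommuting = np.max(np.abs(U1 - U1_naive))

# Commuting case: H(t) = (1 + t) sigma_z commutes with itself at all times
H2 = lambda t: (1.0 + t) * sz
U2 = propagate(H2, T)
U2_naive = hexp(sz * (T + T**2 / 2), 1.0)
err_commuting = np.max(np.abs(U2 - U2_naive))

print(err_noncommuting, err_commuting)  # first is set by the commutator, second ~ 0
```

The discrepancy in the non-commuting case is exactly the higher order commutator content that the time ordered exponential (or Magnus expansion) supplies and the naive guess omits.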

## Expectations

Suppose that we do find $U(t, t_0)$. Then our expectation takes the form

\begin{aligned}{\langle {\psi_s(t)} \rvert} O_s {\lvert {\psi_s(t)} \rangle} = {\langle {\psi_s(t_0)} \rvert} U^\dagger(t, t_0) O_s U(t, t_0) {\lvert {\psi_s(t_0)} \rangle} \end{aligned} \hspace{\stretch{1}}(2.15)

Put

\begin{aligned}{\lvert {\psi_H} \rangle} = {\lvert {\psi_s(t_0)} \rangle},\end{aligned} \hspace{\stretch{1}}(2.16)

and form

\begin{aligned}O_H = U^\dagger(t, t_0) O_s U(t, t_0) \end{aligned} \hspace{\stretch{1}}(2.17)

so that our expectation has the familiar representations

\begin{aligned}{\langle {\psi_s(t)} \rvert} O_s {\lvert {\psi_s(t)} \rangle} ={\langle {\psi_H} \rvert} O_H {\lvert {\psi_H} \rangle} \end{aligned} \hspace{\stretch{1}}(2.18)

## New strategy. Interaction picture.

Let’s define

\begin{aligned}U_I(t, t_0) = e^{\frac{i}{\hbar} H_0(t - t_0)} U(t, t_0)\end{aligned} \hspace{\stretch{1}}(2.19)

or

\begin{aligned}U(t, t_0) = e^{-\frac{i}{\hbar} H_0(t - t_0)} U_I(t, t_0).\end{aligned} \hspace{\stretch{1}}(2.20)

Let’s see how this works. We have

\begin{aligned}i \hbar \frac{d{{U_I}}}{dt} &= i \hbar \frac{d{{}}}{dt} \left(e^{\frac{i}{\hbar} H_0(t - t_0)} U(t, t_0)\right) \\ &=-H_0 e^{\frac{i}{\hbar} H_0(t - t_0)} U(t, t_0)+e^{\frac{i}{\hbar} H_0(t - t_0)} \left( i \hbar \frac{d{{}}}{dt} U(t, t_0) \right) \\ &=-H_0 e^{\frac{i}{\hbar} H_0(t - t_0)} U(t, t_0)+e^{\frac{i}{\hbar} H_0(t - t_0)} \left( H_0 + H'(t) \right) U(t, t_0) \\ &=e^{\frac{i}{\hbar} H_0(t - t_0)} H'(t) U(t, t_0) \\ &=e^{\frac{i}{\hbar} H_0(t - t_0)} H'(t) e^{-\frac{i}{\hbar} H_0(t - t_0)} U_I(t, t_0).\end{aligned}

Define

\begin{aligned}\bar{H}'(t) =e^{\frac{i}{\hbar} H_0(t - t_0)} H'(t) e^{-\frac{i}{\hbar} H_0(t - t_0)},\end{aligned} \hspace{\stretch{1}}(2.21)

so that our operator equation takes the form

\begin{aligned}i \hbar \frac{d{{}}}{dt} U_I(t, t_0) = \bar{H}'(t) U_I(t, t_0).\end{aligned} \hspace{\stretch{1}}(2.22)

Note that we also have the required identity at the initial time

\begin{aligned}U_I(t_0, t_0) = I.\end{aligned} \hspace{\stretch{1}}(2.23)

Without requiring us to actually find $U(t, t_0)$, all of the dynamics of the time dependent interaction are now embedded in the operator equation 2.22 for $U_I(t, t_0)$, with all of the simple evolution due to the time independent portion of the Hamiltonian factored out.

## Connection with the Schr\"{o}dinger picture.

In the Schr\"{o}dinger picture we have

\begin{aligned}{\lvert {\psi_s(t)} \rangle} &= U(t, t_0) {\lvert {\psi_s(t_0)} \rangle} \\ &=e^{-\frac{i}{\hbar} H_0(t - t_0)} U_I(t, t_0){\lvert {\psi_s(t_0)} \rangle}.\end{aligned}

With a definition of the interaction picture ket as

\begin{aligned}{\lvert {\psi_I} \rangle} = U_I(t, t_0) {\lvert {\psi_s(t_0)} \rangle} = U_I(t, t_0) {\lvert {\psi_H} \rangle},\end{aligned} \hspace{\stretch{1}}(2.24)

the Schr\"{o}dinger picture is then related to the interaction picture by

\begin{aligned}{\lvert {\psi_s(t)} \rangle} = e^{-\frac{i}{\hbar} H_0(t - t_0)} {\lvert {\psi_I} \rangle}.\end{aligned} \hspace{\stretch{1}}(2.25)

Also, by applying 2.22 to our initial time Schr\"{o}dinger ket, we remove the last vestiges of $U_I$ and $U$ from the dynamical equation for our time dependent interaction

\begin{aligned}i \hbar \frac{d{{}}}{dt} {\lvert {\psi_I} \rangle} = \bar{H}'(t) {\lvert {\psi_I} \rangle}.\end{aligned} \hspace{\stretch{1}}(2.26)

## Interaction picture expectation.

Inverting 2.25, we can form an operator expectation, and relate the interaction and Schr\"{o}dinger pictures

\begin{aligned}{\langle {\psi_s(t)} \rvert} O_s {\lvert {\psi_s(t)} \rangle} ={\langle {\psi_I} \rvert} e^{\frac{i}{\hbar} H_0(t - t_0)}O_se^{-\frac{i}{\hbar} H_0(t - t_0)}{\lvert {\psi_I} \rangle} .\end{aligned} \hspace{\stretch{1}}(2.27)

With a definition

\begin{aligned}O_I =e^{\frac{i}{\hbar} H_0(t - t_0)}O_se^{-\frac{i}{\hbar} H_0(t - t_0)},\end{aligned} \hspace{\stretch{1}}(2.28)

we have

\begin{aligned}{\langle {\psi_s(t)} \rvert} O_s {\lvert {\psi_s(t)} \rangle} ={\langle {\psi_I} \rvert} O_I{\lvert {\psi_I} \rangle}.\end{aligned} \hspace{\stretch{1}}(2.29)

As before, the time evolution of our interaction picture operator can be found by taking derivatives of 2.28, for which we find

\begin{aligned}i \hbar \frac{d{{O_I(t)}}}{dt} = \left[{O_I(t)},{H_0}\right]\end{aligned} \hspace{\stretch{1}}(2.30)

## Summarizing the interaction picture.

Given

\begin{aligned}H(t) = H_0 + H'(t),\end{aligned} \hspace{\stretch{1}}(2.31)

and initial time states

\begin{aligned}{\lvert {\psi_I(t_0)} \rangle} ={\lvert {\psi_s(t_0)} \rangle} = {\lvert {\psi_H} \rangle},\end{aligned} \hspace{\stretch{1}}(2.32)

we have

\begin{aligned}{\langle {\psi_s(t)} \rvert} O_s {\lvert {\psi_s(t)} \rangle} ={\langle {\psi_I} \rvert} O_I{\lvert {\psi_I} \rangle},\end{aligned} \hspace{\stretch{1}}(2.33)

where

\begin{aligned}{\lvert {\psi_I} \rangle} = U_I(t, t_0) {\lvert {\psi_s(t_0)} \rangle},\end{aligned} \hspace{\stretch{1}}(2.34)

and

\begin{aligned}i \hbar \frac{d{{}}}{dt} {\lvert {\psi_I} \rangle} = \bar{H}'(t) {\lvert {\psi_I} \rangle},\end{aligned} \hspace{\stretch{1}}(2.35)

or

\begin{aligned}i \hbar \frac{d{{}}}{dt} U_I(t, t_0) &= \bar{H}'(t) U_I(t, t_0) \\ U_I(t_0, t_0) &= I.\end{aligned} \hspace{\stretch{1}}(2.36)

Our interaction picture Hamiltonian is

\begin{aligned}\bar{H}'(t) =e^{\frac{i}{\hbar} H_0(t - t_0)} H'(t) e^{-\frac{i}{\hbar} H_0(t - t_0)},\end{aligned} \hspace{\stretch{1}}(2.38)

and for Schr\"{o}dinger operators, independent of time, we have the dynamical equation

\begin{aligned}i \hbar \frac{d{{O_I(t)}}}{dt} = \left[{O_I(t)},{H_0}\right]\end{aligned} \hspace{\stretch{1}}(2.39)
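As a concrete sanity check of this summary (my own sketch, not from the lecture), take a two level system with $H_0 = \sigma_z$, $H'(t) = \cos(t)\,\sigma_x$, $\hbar = 1$ and $t_0 = 0$, and verify numerically that evolving $\lvert \psi_s \rangle$ under the full $H$ and $\lvert \psi_I \rangle$ under $\bar{H}'(t)$ gives matching expectation values via 2.33:

```python
import numpy as np

sz = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)
sx = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)

def hexp(Hm, s):
    # e^{-1j * s * Hm} for Hermitian Hm (hbar = 1)
    w, V = np.linalg.eigh(Hm)
    return (V * np.exp(-1j * s * w)) @ V.conj().T

H0 = sz
Hp = lambda t: np.cos(t) * sx       # the time dependent piece H'(t)
O_s = sx                            # a Schrodinger picture observable

T, N = 2.0, 8000
dt = T / N
psi_s = np.array([1.0, 0.0], dtype=complex)   # |psi_s(0)> = |psi_I(0)>
psi_I = psi_s.copy()
for k in range(N):
    t = (k + 0.5) * dt
    psi_s = hexp(H0 + Hp(t), dt) @ psi_s      # full Schrodinger evolution
    E = hexp(H0, -t)                          # e^{+ i H0 t}
    Hbar = E @ Hp(t) @ E.conj().T             # \bar{H}'(t), eq. 2.21
    psi_I = hexp(Hbar, dt) @ psi_I            # interaction picture evolution, eq. 2.35

E_T = hexp(H0, -T)                            # e^{+ i H0 T}
O_I = E_T @ O_s @ E_T.conj().T                # eq. 2.28 at t = T
lhs = (psi_s.conj() @ O_s @ psi_s).real       # Schrodinger picture expectation
rhs = (psi_I.conj() @ O_I @ psi_I).real       # interaction picture expectation
mismatch = abs(lhs - rhs)
print(mismatch)   # ~ 0, up to the O(dt^2) integrator error
```

Note that neither evolution needed the full $U(t, t_0)$; the interaction picture side only ever touched $H_0$ and $H'(t)$.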

# Justifying the Taylor expansion above (not class notes).

## Multivariable Taylor series

As outlined in section 2.8 ($8.10$) of [2], we want to derive the multi-variable Taylor expansion for a scalar valued function of some number of variables

\begin{aligned}f(\mathbf{u}) = f(u^1, u^2, \cdots),\end{aligned} \hspace{\stretch{1}}(3.40)

For this, consider the displacement operation applied to the vector argument

\begin{aligned}f(\mathbf{a} + \mathbf{x}) = {\left.{{f(\mathbf{a} + t \mathbf{x})}}\right\vert}_{{t=1}}.\end{aligned} \hspace{\stretch{1}}(3.41)

We can Taylor expand a single variable function without any trouble, so introduce

\begin{aligned}g(t) = f(\mathbf{a} + t \mathbf{x}),\end{aligned} \hspace{\stretch{1}}(3.42)

where

\begin{aligned}g(1) = f(\mathbf{a} + \mathbf{x}).\end{aligned} \hspace{\stretch{1}}(3.43)

We have

\begin{aligned}g(t) = g(0) + t {\left.{{ \frac{\partial {g}}{\partial {t}} }}\right\vert}_{{t = 0}}+ \frac{t^2}{2!} {\left.{{ \frac{\partial^2 {g}}{\partial {t^2}} }}\right\vert}_{{t = 0}}+ \cdots,\end{aligned} \hspace{\stretch{1}}(3.44)

so that

\begin{aligned}g(1) = g(0) + {\left.{{ \frac{\partial {g}}{\partial {t}} }}\right\vert}_{{t = 0}}+ \frac{1}{2!} {\left.{{ \frac{\partial^2 {g}}{\partial {t^2}} }}\right\vert}_{{t = 0}}+ \cdots.\end{aligned} \hspace{\stretch{1}}(3.45)

The multivariable Taylor series now becomes a plain old application of the chain rule, where we have to evaluate

\begin{aligned}\frac{dg}{dt} &= \frac{d{{}}}{dt} f(a^1 + t x^1, a^2 + t x^2, \cdots) \\ &= \sum_i \frac{\partial {}}{\partial {(a^i + t x^i)}} f(\mathbf{a} + t \mathbf{x}) \frac{\partial {a^i + t x^i}}{\partial {t}},\end{aligned}

so that

\begin{aligned}{\left.{{\frac{dg}{dt} }}\right\vert}_{{t=0}}= \sum_i x^i \left( {\left.{{ \frac{\partial {f}}{\partial {x^i}}}}\right\vert}_{{x^i = a^i}}\right).\end{aligned} \hspace{\stretch{1}}(3.46)

Assuming a Euclidean space we can write this in the notationally more pleasant fashion using a gradient operator for the space

\begin{aligned}{\left.{{\frac{dg}{dt} }}\right\vert}_{{t=0}} = {\left.{{\mathbf{x} \cdot \boldsymbol{\nabla}_{\mathbf{u}} f(\mathbf{u})}}\right\vert}_{{\mathbf{u} = \mathbf{a}}}.\end{aligned} \hspace{\stretch{1}}(3.47)

To handle the higher order terms, we repeat the chain rule application, yielding for example

\begin{aligned}{\left.{{\frac{d^2 f(\mathbf{a} + t \mathbf{x})}{dt^2} }}\right\vert}_{{t=0}} &={\left.{{\frac{d{{}}}{dt} \sum_i x^i \frac{\partial {f(\mathbf{a} + t \mathbf{x})}}{\partial {(a^i + t x^i)}} }}\right\vert}_{{t=0}}\\ &={\left.{{\sum_i x^i \frac{\partial {}}{\partial {(a^i + t x^i)}} \frac{d{{f(\mathbf{a} + t \mathbf{x})}}}{dt}}}\right\vert}_{{t=0}} \\ &={\left.{{(\mathbf{x} \cdot \boldsymbol{\nabla}_{\mathbf{u}})^2 f(\mathbf{u})}}\right\vert}_{{\mathbf{u} = \mathbf{a}}}.\end{aligned}

Thus the Taylor series associated with a vector displacement takes the tidy form

\begin{aligned}f(\mathbf{a} + \mathbf{x}) = \sum_{k=0}^\infty \frac{1}{{k!}} {\left.{{(\mathbf{x} \cdot \boldsymbol{\nabla}_{\mathbf{u}})^k f(\mathbf{u})}}\right\vert}_{{\mathbf{u} = \mathbf{a}}}.\end{aligned} \hspace{\stretch{1}}(3.48)

Even more fancy, we can form the operator equation

\begin{aligned}f(\mathbf{a} + \mathbf{x}) = {\left.{{e^{ \mathbf{x} \cdot \boldsymbol{\nabla}_{\mathbf{u}} } f(\mathbf{u})}}\right\vert}_{{\mathbf{u} = \mathbf{a}}}\end{aligned} \hspace{\stretch{1}}(3.49)

Here a dummy variable $\mathbf{u}$ has been retained as an instruction not to differentiate the $\mathbf{x}$ part of the directional derivative in any repeated applications of the $\mathbf{x} \cdot \boldsymbol{\nabla}$ operator.

That notational kludge can be removed by swapping $\mathbf{a}$ and $\mathbf{x}$

\begin{aligned}f(\mathbf{a} + \mathbf{x}) = \sum_{k=0}^\infty \frac{1}{{k!}} (\mathbf{a} \cdot \boldsymbol{\nabla})^k f(\mathbf{x})=e^{ \mathbf{a} \cdot \boldsymbol{\nabla} } f(\mathbf{x}),\end{aligned} \hspace{\stretch{1}}(3.50)

where $\boldsymbol{\nabla} = \boldsymbol{\nabla}_{\mathbf{x}} = ({\partial {}}/{\partial {x^1}}, {\partial {}}/{\partial {x^2}}, ...)$.
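Since a polynomial has a terminating Taylor series, the operator form 3.50 can be checked directly with a short symbolic sketch (my own check, using sympy; the polynomial is an arbitrary choice):

```python
import sympy as sp

x, y, a1, a2 = sp.symbols('x y a1 a2')
f = x**3 * y + 2 * x * y**2       # an arbitrary degree four polynomial

# Build sum_k (a . grad)^k f / k!  -- the series terminates at the degree of f.
term, series = f, f
for k in range(1, 6):
    term = a1 * sp.diff(term, x) + a2 * sp.diff(term, y)   # one more a . grad
    series += term / sp.factorial(k)

shifted = f.subs({x: x + a1, y: y + a2}, simultaneous=True)
residual = sp.expand(series - shifted)
print(residual)   # 0: e^{a . grad} f(x) = f(x + a)
```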

Having derived this (or, for those with lesser degrees of amnesia, recalled it), we can see that 2.2 was a direct application of this result, retaining no second order or higher terms.

Our expression used in the interaction Hamiltonian discussion was

\begin{aligned}\frac{1}{{{\left\lvert{\mathbf{r} - \mathbf{R}}\right\rvert}}} \approx \frac{1}{{{\left\lvert{\mathbf{r}}\right\rvert}}} + \mathbf{R} \cdot {\left.{{\left(\frac{\partial {}}{\partial {\mathbf{R}}} \frac{1}{{{\left\lvert{ \mathbf{r} - \mathbf{R}}\right\rvert}}}\right)}}\right\vert}_{{\mathbf{R} = 0}}.\end{aligned} \hspace{\stretch{1}}(3.51)

This has the same structure as above with some variable substitutions. Evaluating the derivative we have

\begin{aligned}\frac{\partial {}}{\partial {\mathbf{R}}} \frac{1}{{{\left\lvert{ \mathbf{r} - \mathbf{R}}\right\rvert}}}&=\mathbf{e}_i \frac{\partial {}}{\partial {R^i}} ((x^j - R^j)^2)^{-1/2} \\ &=\mathbf{e}_i \left(-\frac{1}{{2}}\right) 2 (x^j - R^j) \frac{\partial {(x^j - R^j)}}{\partial {R^i}} \frac{1}{{{\left\lvert{\mathbf{r} - \mathbf{R}}\right\rvert}^3}} \\ &= \frac{\mathbf{r} - \mathbf{R}}{{\left\lvert{\mathbf{r} - \mathbf{R}}\right\rvert}^3} ,\end{aligned}

and at $\mathbf{R} = 0$ we have

\begin{aligned}\frac{1}{{{\left\lvert{\mathbf{r} - \mathbf{R}}\right\rvert}}} \approx \frac{1}{{{\left\lvert{\mathbf{r}}\right\rvert}}} + \mathbf{R} \cdot \frac{\mathbf{r}}{{\left\lvert{\mathbf{r}}\right\rvert}^3}.\end{aligned} \hspace{\stretch{1}}(3.52)

We see that this directional derivative produces the classical Coulomb electric field expression for an electrostatic distribution, once we take the $\mathbf{r}/{\left\lvert{\mathbf{r}}\right\rvert}^3$ factor and multiply it with the $- Z e$ factor.
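A quick numerical check of 3.52 (my own sketch, with an arbitrarily chosen $\mathbf{r}$ and a small $\mathbf{R}$) shows the first order approximation beating the zeroth order one, with the residual error at the expected $O({\left\lvert{\mathbf{R}}\right\rvert}^2)$ level:

```python
import numpy as np

r = np.array([1.0, 2.0, 2.0])          # |r| = 3
R = np.array([3e-4, -2e-4, 5e-4])      # a small displacement, |R| << |r|

exact = 1.0 / np.linalg.norm(r - R)
rn = np.linalg.norm(r)
zeroth = 1.0 / rn
first = 1.0 / rn + np.dot(R, r) / rn**3    # eq. 3.52

err0 = abs(exact - zeroth)   # O(|R|)
err1 = abs(exact - first)    # O(|R|^2)
print(err0, err1)
```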

## With geometric algebra.

A different way to justify the expansion of 2.2 is to consider a Clifford algebra factorization (following notation from [3]) of the absolute vector difference, where $\mathbf{R}$ is considered small.

\begin{aligned}{\left\lvert{\mathbf{r} - \mathbf{R}}\right\rvert}&= \sqrt{ \left(\mathbf{r} - \mathbf{R}\right) \left(\mathbf{r} - \mathbf{R}\right) } \\ &= \sqrt{ \left\langle{{\mathbf{r} \left(1 - \frac{1}{\mathbf{r}} \mathbf{R}\right) \left(1 - \mathbf{R} \frac{1}{\mathbf{r}}\right) \mathbf{r}}}\right\rangle } \\ &= \sqrt{ \left\langle{{\mathbf{r}^2 \left(1 - \frac{1}{\mathbf{r}} \mathbf{R}\right) \left(1 - \mathbf{R} \frac{1}{\mathbf{r}}\right) }}\right\rangle } \\ &= {\left\lvert{\mathbf{r}}\right\rvert} \sqrt{ 1 - 2 \frac{1}{\mathbf{r}} \cdot \mathbf{R} + \left\langle{{\frac{1}{\mathbf{r}} \mathbf{R} \mathbf{R} \frac{1}{\mathbf{r}}}}\right\rangle} \\ &= {\left\lvert{\mathbf{r}}\right\rvert} \sqrt{ 1 - 2 \frac{1}{\mathbf{r}} \cdot \mathbf{R} + \frac{\mathbf{R}^2}{\mathbf{r}^2}}\end{aligned}

Neglecting the $\mathbf{R}^2$ term, we can then Taylor series expand this scalar expression

\begin{aligned}\frac{1}{{{\left\lvert{\mathbf{r} - \mathbf{R}}\right\rvert}}} \approx\frac{1}{{{\left\lvert{\mathbf{r}}\right\rvert}}} \left( 1 + \frac{1}{\mathbf{r}} \cdot \mathbf{R}\right) =\frac{1}{{{\left\lvert{\mathbf{r}}\right\rvert}}} + \frac{\hat{\mathbf{r}}}{\mathbf{r}^2} \cdot \mathbf{R}=\frac{1}{{{\left\lvert{\mathbf{r}}\right\rvert}}} + \frac{\mathbf{r}}{{\left\lvert{\mathbf{r}}\right\rvert}^3} \cdot \mathbf{R}.\end{aligned} \hspace{\stretch{1}}(3.53)

Observe this is what was found with the multivariable Taylor series expansion too.

# References

[1] B.R. Desai. Quantum Mechanics with Basic Field Theory. Cambridge University Press, 2009.

[2] D. Hestenes. New Foundations for Classical Mechanics. Kluwer Academic Publishers, 1999.

[3] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.

## PHY456H1F: Quantum Mechanics II. Lecture 5 (Taught by Prof J.E. Sipe). Perturbation theory and degeneracy. Review of dynamics.

Posted by peeterjoot on September 26, 2011



# Issues concerning degeneracy.

## When the perturbed state is non-degenerate.

Suppose the state of interest is non-degenerate but others are

FIXME: diagram. states designated by dashes labeled $n1$, $n2$, $n3$ degeneracy $\alpha = 3$ for energy $E_n^{(0)}$.

This is no problem except for notation, and if the analysis is repeated we find

\begin{aligned}E_s &= E_s^{(0)} + \lambda {H_{ss}}' + \lambda^2 \sum_{m \ne s, \alpha} \frac{{\left\lvert{{H_{m \alpha ; s}}'}\right\rvert}^2 }{ E_s^{(0)} - E_m^{(0)} } + \cdots\\ {\lvert {\bar{\psi}_s} \rangle} &= {\lvert {{\psi_s}^{(0)}} \rangle} + \lambda\sum_{m \ne s, \alpha} \frac{{H_{m \alpha ; s}}'}{ E_s^{(0)} - E_m^{(0)} } {\lvert {{\psi_{m \alpha}}^{(0)}} \rangle}+ \cdots,\end{aligned} \hspace{\stretch{1}}(2.1)

where

\begin{aligned}{H_{m \alpha ; s}}' ={\langle {{\psi_{m \alpha}}^{(0)}} \rvert} H' {\lvert {{\psi_{s}}^{(0)}} \rangle}\end{aligned} \hspace{\stretch{1}}(2.3)

## When the perturbed state is also degenerate.

FIXME: diagram. states designated by dashes labeled $n1$, $n2$, $n3$ degeneracy $\alpha = 3$ for energy $E_n^{(0)}$, and states designated by dashes labeled $s1$, $s2$, $s3$ degeneracy $\alpha = 3$ for energy $E_s^{(0)}$.

If we just blindly repeat the derivation for the non-degenerate case we would obtain

\begin{aligned}E_s &= E_s^{(0)} + \lambda {H_{s1 ; s1}}' + \lambda^2 \sum_{m \ne s, \alpha} \frac{{\left\lvert{{H_{m \alpha ; s1}}'}\right\rvert}^2 }{ E_s^{(0)} - E_m^{(0)} } + \lambda^2 \sum_{\alpha \ne 1} \frac{{\left\lvert{{H_{s \alpha ; s1}}'}\right\rvert}^2 }{ E_s^{(0)} - E_s^{(0)} } + \cdots\\ {\lvert {\bar{\psi}_s} \rangle} &= {\lvert {{\psi_s}^{(0)}} \rangle} + \lambda\sum_{m \ne s, \alpha} \frac{{H_{m \alpha ; s}}'}{ E_s^{(0)} - E_m^{(0)} } {\lvert {{\psi_{m \alpha}}^{(0)}} \rangle}+ \lambda\sum_{\alpha \ne s1} \frac{{H_{s \alpha ; s1}}'}{ E_s^{(0)} - E_s^{(0)} } {\lvert {{\psi_{s \alpha}}^{(0)}} \rangle}+ \cdots,\end{aligned} \hspace{\stretch{1}}(2.4)

where

\begin{aligned}{H_{m \alpha ; s1}}' ={\langle {{\psi_{m \alpha}}^{(0)}} \rvert} H' {\lvert {{\psi_{s1}}^{(0)}} \rangle}\end{aligned} \hspace{\stretch{1}}(2.6)

Note that the $E_s^{(0)} - E_s^{(0)}$ in the denominators (highlighted in red in lecture) is NOT a typo; that identically zero denominator is why we run into trouble. There is one case where a perturbation approach is still possible. That case is if we happen to have

\begin{aligned}{\langle {{\psi_{m \alpha}}^{(0)}} \rvert} H' {\lvert {{\psi_{s1}}^{(0)}} \rangle} = 0.\end{aligned} \hspace{\stretch{1}}(2.7)

That may not be obvious, but if one returns to the original derivation, the right terms cancel so that one will not end up with the $0/0$ problem.

FIXME: do this derivation.

## Diagonalizing the perturbation Hamiltonian.

Suppose that we do not have this special zero condition that allows the perturbation treatment to remain valid. What can we do? It turns out that we can make use of the fact that the perturbation Hamiltonian is Hermitian, and diagonalize the matrix

\begin{aligned}{\langle {{\psi_{s \alpha}}^{(0)}} \rvert} H' {\lvert {{\psi_{s \beta}}^{(0)}} \rangle} \end{aligned} \hspace{\stretch{1}}(2.8)

In the example of a twofold degeneracy, this amounts to us choosing not to work with the states

\begin{aligned}{\lvert {\psi_{s1}^{(0)}} \rangle}, {\lvert {\psi_{s2}^{(0)}} \rangle},\end{aligned} \hspace{\stretch{1}}(2.9)

but instead with some linear combinations of the two

\begin{aligned}{\lvert {\psi_{sI}^{(0)}} \rangle} &= a_1 {\lvert {\psi_{s1}^{(0)}} \rangle} + b_1 {\lvert {\psi_{s2}^{(0)}} \rangle} \\ {\lvert {\psi_{sII}^{(0)}} \rangle} &= a_2 {\lvert {\psi_{s1}^{(0)}} \rangle} + b_2 {\lvert {\psi_{s2}^{(0)}} \rangle} \end{aligned} \hspace{\stretch{1}}(2.10)

In this new basis, once found, we have

\begin{aligned}{\langle {{\psi_{s \alpha}}^{(0)}} \rvert} H' {\lvert {{\psi_{s \beta}}^{(0)}} \rangle} = \mathcal{H}_\alpha \delta_{\alpha \beta}\end{aligned} \hspace{\stretch{1}}(2.12)

Utilizing this to fix the previous treatment, if the analysis is repeated correctly one gets

\begin{aligned}E_{s\alpha} &= E_s^{(0)} + \lambda {H_{s\alpha ; s\alpha}}' + \lambda^2 \sum_{m \ne s, \alpha} \frac{{\left\lvert{{H_{m \beta ; s \alpha}}'}\right\rvert}^2 }{ E_s^{(0)} - E_m^{(0)} } + \cdots\\ {\lvert {\bar{\psi}_{s \alpha}} \rangle} &= {\lvert {{\psi_{s \alpha}}^{(0)}} \rangle} + \lambda\sum_{m \ne s, \beta} \frac{{H_{m \beta ; s \alpha}}'}{ E_s^{(0)} - E_m^{(0)} } {\lvert {{\psi_{m \beta}}^{(0)}} \rangle}+ \cdots.\end{aligned} \hspace{\stretch{1}}(2.13)

We see that a degenerate state can be split by applying perturbation.
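A small numeric illustration of this splitting (my own toy numbers, not from the lecture): take two degenerate levels at $E_s^{(0)} = 0$ plus one non-degenerate level, and compare the exact eigenvalues of $H_0 + \lambda H'$ against the first order prediction obtained by diagonalizing $H'$ restricted to the degenerate subspace:

```python
import numpy as np

H0 = np.diag([0.0, 0.0, 5.0])          # twofold degenerate level plus one other
Hp = np.array([[0.0, 1.0, 0.2],        # an arbitrary Hermitian perturbation
               [1.0, 0.0, 0.3],
               [0.2, 0.3, 0.0]])
lam = 1e-3

# First order: eigenvalues of H' restricted to the degenerate subspace
split = np.sort(np.linalg.eigvalsh(Hp[:2, :2]))     # [-1, 1]

exact = np.sort(np.linalg.eigvalsh(H0 + lam * Hp))[:2]
predicted = lam * split
print(exact, predicted)   # the degenerate level splits into two, as claimed
```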

FIXME: do this derivation.

FIXME: diagram. $E_s^{(0)}$ as one energy level without perturbation, and as two distinct levels with perturbation.

I'll bet that this is the origin of the spectral line splitting, especially given that an atom like hydrogen has degenerate states.

# Review of dynamics.

We want to move on to time dependent problems. In general for a time dependent problem, the answer follows provided one has solved for all the perturbed energy eigenvalues. This can be laborious (or not feasible due to infinite sums).

Before doing this, let’s review our dynamics as covered in section 3 of the text [1].

## Schr\"{o}dinger and Heisenberg pictures

Our equation of motion in the Schr\"{o}dinger picture is the familiar

\begin{aligned}i \hbar \frac{d}{dt} {\lvert {\psi_s(t)} \rangle} = H {\lvert {\psi_s(t)} \rangle}\end{aligned} \hspace{\stretch{1}}(3.15)

and most of our operators $X, P, \cdots$ are time independent. The expectation value of an operator is

\begin{aligned}\left\langle{{O}}\right\rangle(t) = {\langle {\psi_s(t)} \rvert} O_s{\lvert {\psi_s(t)} \rangle}\end{aligned} \hspace{\stretch{1}}(3.16)

where $O_s$ is the operator in the Schr\"{o}dinger picture, which is not time dependent.

Formally, the time evolution of any state is given by

\begin{aligned}{\lvert {\psi_s(t)} \rangle}= e^{-i H t/\hbar}{\lvert {\psi_s(0)} \rangle} = U(t, 0) {\lvert {\psi_s(0)} \rangle} \end{aligned} \hspace{\stretch{1}}(3.17)

so the expectation of an operator can be written

\begin{aligned}\left\langle{{O}}\right\rangle(t) = {\langle {\psi_s(0)} \rvert} e^{i H t/\hbar}O_se^{-i H t/\hbar}{\lvert {\psi_s(0)} \rangle}.\end{aligned} \hspace{\stretch{1}}(3.18)

With the introduction of the Heisenberg ket

\begin{aligned}{\lvert {\psi_H} \rangle} = {\lvert {\psi_s(0)} \rangle},\end{aligned} \hspace{\stretch{1}}(3.19)

and Heisenberg operators

\begin{aligned}O_H = e^{i H t/\hbar} O_s e^{-i H t/\hbar},\end{aligned} \hspace{\stretch{1}}(3.20)

the expectation evolution takes the form

\begin{aligned}\left\langle{{O}}\right\rangle(t) = {\langle {\psi_H} \rvert} O_H{\lvert {\psi_H} \rangle}.\end{aligned} \hspace{\stretch{1}}(3.21)

Note that because the Hamiltonian commutes with its exponential (it commutes with itself and with any power series of itself), the Hamiltonian in the Heisenberg picture is the same as in the Schr\"{o}dinger picture

\begin{aligned}H_H = e^{i H t/\hbar} H e^{-i H t/\hbar} = H.\end{aligned} \hspace{\stretch{1}}(3.22)

### Time evolution and the Commutator

Taking the derivative of 3.20 provides us with the time evolution of any operator in the Heisenberg picture

\begin{aligned}i \hbar \frac{d}{dt} O_H(t) &=i \hbar \frac{d}{dt} \left( e^{i H t/\hbar} O_s e^{-i H t/\hbar}\right) \\ &=i \hbar \left( \frac{i H}{\hbar} e^{i H t/\hbar} O_s e^{-i H t/\hbar}+e^{i H t/\hbar} O_s e^{-i H t/\hbar} \frac{-i H}{\hbar} \right) \\ &=\left( -H O_H+O_H H\right).\end{aligned}

We can write this as a commutator

\begin{aligned}i \hbar \frac{d}{dt} O_H(t) = \left[{O_H},{H}\right].\end{aligned} \hspace{\stretch{1}}(3.23)
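We can spot check 3.23 numerically (my own sketch, with $\hbar = 1$ and an arbitrary two level Hamiltonian): build $O_H(t)$ from 3.20 and compare a central finite difference of $i\,dO_H/dt$ against the commutator $\left[{O_H},{H}\right]$.

```python
import numpy as np

sz = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)
sx = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
H, O_s = sz + 0.5 * sx, sx

def hexp(Hm, s):
    # e^{-1j * s * Hm} for Hermitian Hm (hbar = 1)
    w, V = np.linalg.eigh(Hm)
    return (V * np.exp(-1j * s * w)) @ V.conj().T

def O_H(t):
    U = hexp(H, t)                   # e^{-i H t}
    return U.conj().T @ O_s @ U      # eq. 3.20

t, h = 0.7, 1e-5
lhs = 1j * (O_H(t + h) - O_H(t - h)) / (2 * h)   # i hbar d O_H / dt
rhs = O_H(t) @ H - H @ O_H(t)                    # [O_H, H]
diff = np.max(np.abs(lhs - rhs))
print(diff)   # ~ 0, up to O(h^2) finite difference error
```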

### Summarizing the two pictures.

\begin{aligned}\text{Schr\"{o}dinger picture} &\qquad \text{Heisenberg picture} \\ i \hbar \frac{d}{dt} {\lvert {\psi_s(t)} \rangle} = H {\lvert {\psi_s(t)} \rangle} &\qquad i \hbar \frac{d}{dt} O_H(t) = \left[{O_H},{H}\right] \\ {\langle {\psi_s(t)} \rvert} O_S {\lvert {\psi_s(t)} \rangle} &= {\langle {\psi_H} \rvert} O_H {\lvert {\psi_H} \rangle} \\ {\lvert {\psi_s(0)} \rangle} &= {\lvert {\psi_H} \rangle} \\ O_S &= O_H(0)\end{aligned}

# References

[1] B.R. Desai. Quantum Mechanics with Basic Field Theory. Cambridge University Press, 2009.

## PHY356F: Quantum Mechanics I. Lecture 11 notes. Harmonic Oscillator.

Posted by peeterjoot on November 30, 2010

# Setup.

Why study this problem?

It is relevant to describing the oscillation of molecules, quantum states of light, vibrations of the lattice structure of a solid, and so on.

FIXME: projected picture of masses on springs, with a ladle shaped well, approximately Harmonic about the minimum of the bucket.

The problem to solve is the one dimensional Hamiltonian

\begin{aligned}V(X) &= \frac{1}{{2}} K X^2 \\ K &= m \omega^2 \\ H &= \frac{P^2}{2m} + V(X)\end{aligned} \hspace{\stretch{1}}(8.168)

where $m$ is the mass, $\omega$ is the frequency, $X$ is the position operator, and $P$ is the momentum operator. Of these quantities, $\omega$ and $m$ are classical quantities.

This problem can be used to illustrate some of the reasons why we study the different pictures (Heisenberg, interaction and Schr\"{o}dinger). It is well suited to all of them (FIXME: look up an example of this with the interaction picture; the book covers the Heisenberg and Schr\"{o}dinger methods).

We attack this with a non-intuitive, but cool technique. Introduce the raising $a^\dagger$ and lowering $a$ operators:

\begin{aligned}a &= \sqrt{\frac{m \omega}{2 \hbar}} \left( X + i \frac{P}{m\omega} \right) \\ a^\dagger &= \sqrt{\frac{m \omega}{2 \hbar}} \left( X - i \frac{P}{m\omega} \right)\end{aligned} \hspace{\stretch{1}}(8.171)

\paragraph{Question:} Are we using the dagger for more than Hermitian conjugation in this case?
\paragraph{Answer:} No, this is precisely the Hermitian conjugation operation.

Solving for $X$ and $P$ in terms of $a$ and $a^\dagger$, we have

\begin{aligned}a + a^\dagger &= \sqrt{\frac{m \omega}{2 \hbar}} 2 X \\ a - a^\dagger &= \sqrt{\frac{m \omega}{2 \hbar}} 2 i \frac{P }{m \omega}\end{aligned}

or

\begin{aligned}X &= \sqrt{\frac{\hbar}{2 m \omega}} (a^\dagger + a) \\ P &= i \sqrt{\frac{\hbar m \omega}{2}} (a^\dagger -a)\end{aligned} \hspace{\stretch{1}}(8.173)

Expressing $H$ in terms of $a$ and $a^\dagger$, we have

\begin{aligned}H &= \frac{P^2}{2m} + \frac{1}{{2}} K X^2 \\ &= \frac{1}{2m} \left(i \sqrt{\frac{\hbar m \omega}{2}} (a^\dagger -a)\right)^2+ \frac{1}{{2}} m \omega^2\left(\sqrt{\frac{\hbar}{2 m \omega}} (a^\dagger + a) \right)^2 \\ &= \frac{-\hbar \omega}{4} \left(a^\dagger a^\dagger + a^2 - a a^\dagger - a^\dagger a\right)+ \frac{\hbar \omega}{4}\left(a^\dagger a^\dagger + a^2 + a a^\dagger + a^\dagger a\right) \\ \end{aligned}

\begin{aligned}H= \frac{\hbar \omega}{2} \left(a a^\dagger + a^\dagger a\right) = \frac{\hbar \omega}{2} \left(2 a^\dagger a + \left[{a},{a^\dagger}\right]\right) \end{aligned} \hspace{\stretch{1}}(8.175)

Since $\left[{X},{P}\right] = i \hbar \mathbf{1}$, we can show that $\left[{a},{a^\dagger}\right] = \mathbf{1}$. Solve for $\left[{a},{a^\dagger}\right]$ as follows

\begin{aligned}i \hbar &=\left[{X},{P}\right] \\ &=\left[{\sqrt{\frac{\hbar}{2 m \omega}} (a^\dagger + a) },{i \sqrt{\frac{\hbar m \omega}{2}} (a^\dagger -a)}\right] \\ &=\sqrt{\frac{\hbar}{2 m \omega}} i \sqrt{\frac{\hbar m \omega}{2}} \left[{a^\dagger + a},{a^\dagger -a}\right] \\ &= \frac{i \hbar}{2}\left(\left[{a^\dagger},{a^\dagger}\right] -\left[{a^\dagger},{a}\right] +\left[{a},{a^\dagger}\right] -\left[{a},{a}\right] \right) \\ &= \frac{i \hbar}{2}\left(0+2 \left[{a},{a^\dagger}\right] -0\right)\end{aligned}

Comparing LHS and RHS we have as stated

\begin{aligned}\left[{a},{a^\dagger}\right] = \mathbf{1}\end{aligned} \hspace{\stretch{1}}(8.176)

and thus from 8.175 we have

\begin{aligned}H = \hbar \omega \left( a^\dagger a + \frac{\mathbf{1}}{2} \right)\end{aligned} \hspace{\stretch{1}}(8.177)
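The ladder-operator algebra above is easy to test numerically. Here is a minimal sketch of mine (not from the lecture) using numpy matrices in a truncated number basis; note that truncation corrupts only the last diagonal entry of the commutator.

```python
import numpy as np

N = 8                                      # truncated basis |0>, ..., |N-1>
a = np.diag(np.sqrt(np.arange(1, N)), 1)   # lowering operator
ad = a.conj().T                            # raising operator

comm = a @ ad - ad @ a                     # [a, a^dagger]
# Truncation spoils only the (N-1, N-1) entry; the rest is exactly the identity.
print(np.allclose(comm[:N-1, :N-1], np.eye(N-1)))   # True

hbar = omega = 1.0
H = hbar * omega * (ad @ a + 0.5 * np.eye(N))
print(np.diag(H))   # hbar omega (n + 1/2): [0.5, 1.5, 2.5, ...]
```

The defective corner entry is the standard artifact of representing an infinite-dimensional algebra with finite matrices.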

Let ${\lvert {n} \rangle}$ be the eigenstate of $H$ so that $H{\lvert {n} \rangle} = E_n {\lvert {n} \rangle}$. From 8.177 we have

\begin{aligned}H {\lvert {n} \rangle} =\hbar \omega \left( a^\dagger a + \frac{\mathbf{1}}{2} \right) {\lvert {n} \rangle}\end{aligned} \hspace{\stretch{1}}(8.178)

or

\begin{aligned}a^\dagger a {\lvert {n} \rangle} + \frac{{\lvert {n} \rangle}}{2} = \frac{E_n}{\hbar \omega} {\lvert {n} \rangle}\end{aligned} \hspace{\stretch{1}}(8.179)

\begin{aligned}a^\dagger a {\lvert {n} \rangle} = \left( \frac{E_n}{\hbar \omega} - \frac{1}{{2}} \right) {\lvert {n} \rangle} = \lambda_n {\lvert {n} \rangle}\end{aligned} \hspace{\stretch{1}}(8.180)

We wish now to find the eigenstates of the “Number” operator $a^\dagger a$, which are simultaneously eigenstates of the Hamiltonian operator.

Observe that we have

\begin{aligned}a^\dagger a (a^\dagger {\lvert {n} \rangle} ) &= a^\dagger ( a a^\dagger {\lvert {n} \rangle} ) \\ &= a^\dagger ( \mathbf{1} + a^\dagger a ) {\lvert {n} \rangle}\end{aligned}

where we used $\left[{a},{a^\dagger}\right] = a a^\dagger - a^\dagger a = \mathbf{1}$.

\begin{aligned}a^\dagger a (a^\dagger {\lvert {n} \rangle} ) &= a^\dagger \left( \mathbf{1} + \frac{E_n}{\hbar\omega} - \frac{\mathbf{1}}{2} \right) {\lvert {n} \rangle} \\ &= a^\dagger \left( \frac{E_n}{\hbar\omega} + \frac{\mathbf{1}}{2} \right) {\lvert {n} \rangle},\end{aligned}

or

\begin{aligned}a^\dagger a (a^\dagger {\lvert {n} \rangle} ) = (\lambda_n + 1) (a^\dagger {\lvert {n} \rangle} )\end{aligned} \hspace{\stretch{1}}(8.181)

The new state $a^\dagger {\lvert {n} \rangle}$ is presumed to lie in the same space, expressible as a linear combination of the basis states of that space. Acting on this new state with the operator $a^\dagger a$, we find that the eigenvalue is incremented, but the state is otherwise unchanged. Any such state $a^\dagger {\lvert {n} \rangle}$ is an eigenstate of $a^\dagger a$, and therefore also an eigenstate of the Hamiltonian.

Play the same game and win big by discovering that

\begin{aligned}a^\dagger a ( a {\lvert {n} \rangle} ) = (\lambda_n -1) (a {\lvert {n} \rangle} )\end{aligned} \hspace{\stretch{1}}(8.182)

There will be some state ${\lvert {0} \rangle}$ such that

\begin{aligned}a {\lvert {0} \rangle} = 0 {\lvert {0} \rangle}\end{aligned} \hspace{\stretch{1}}(8.183)

which implies

\begin{aligned}a^\dagger (a {\lvert {0} \rangle}) = (a^\dagger a) {\lvert {0} \rangle} = 0\end{aligned} \hspace{\stretch{1}}(8.184)

so from 8.180 we have

\begin{aligned}\lambda_0 = 0\end{aligned} \hspace{\stretch{1}}(8.185)

Since each application of $a$ lowers $\lambda$ by one and the sequence must terminate at $\lambda_0 = 0$, we can identify $\lambda_n = n$, for

\begin{aligned}\lambda_n = \left( \frac{E_n}{\hbar\omega} - \frac{1}{{2}} \right) = n,\end{aligned} \hspace{\stretch{1}}(8.186)

or

\begin{aligned}\frac{E_n}{\hbar\omega} = n + \frac{1}{{2}}\end{aligned} \hspace{\stretch{1}}(8.187)

or

\begin{aligned}E_n = \hbar \omega \left( n + \frac{1}{{2}} \right)\end{aligned} \hspace{\stretch{1}}(8.188)

where $n = 0, 1, 2, \cdots$.

We can write

\begin{aligned}\hbar \omega \left( a^\dagger a + \frac{1}{{2}} \mathbf{1} \right) {\lvert {n} \rangle} &= E_n {\lvert {n} \rangle} \\ a^\dagger a {\lvert {n} \rangle} + \frac{1}{{2}} {\lvert {n} \rangle} &= \frac{E_n}{\hbar \omega} {\lvert {n} \rangle} \\ \end{aligned}

or

\begin{aligned}a^\dagger a {\lvert {n} \rangle} = \left( \frac{E_n}{\hbar \omega} - \frac{1}{{2}} \right) {\lvert {n} \rangle} = \lambda_n {\lvert {n} \rangle} = n {\lvert {n} \rangle}\end{aligned} \hspace{\stretch{1}}(8.189)

We call this operator $a^\dagger a = N$, the number operator, so that

\begin{aligned}N {\lvert {n} \rangle} = n {\lvert {n} \rangle}\end{aligned} \hspace{\stretch{1}}(8.190)

# Relating states.

Recall the calculation we performed for

\begin{aligned}L_{+} {\lvert {lm} \rangle} &= C_{+} {\lvert {l, m+1} \rangle} \\ L_{-} {\lvert {lm} \rangle} &= C_{-} {\lvert {l, m-1} \rangle}\end{aligned} \hspace{\stretch{1}}(9.191)

Where $C_{+}$ and $C_{-}$ are constants. The next game we are going to play is to work out $C_n$ for the lowering operation

\begin{aligned}a{\lvert {n} \rangle} = C_n {\lvert {n-1} \rangle}\end{aligned} \hspace{\stretch{1}}(9.193)

and the raising operation

\begin{aligned}a^\dagger {\lvert {n} \rangle} = B_n {\lvert {n+1} \rangle}.\end{aligned} \hspace{\stretch{1}}(9.194)

For the Hermitian conjugate of $a {\lvert {n} \rangle}$ we have

\begin{aligned}(a {\lvert {n} \rangle})^\dagger = ( C_n {\lvert {n-1} \rangle} )^\dagger = {\langle {n-1} \rvert} C_n^{*}\end{aligned} \hspace{\stretch{1}}(9.195)

So

\begin{aligned}({\langle {n} \rvert} a^\dagger) (a {\lvert {n} \rangle}) = C_n C_n^{*} \left\langle{{n-1}} \vert {{n-1}}\right\rangle = {\left\lvert{C_n}\right\rvert}^2\end{aligned} \hspace{\stretch{1}}(9.196)

Expanding the LHS we have

\begin{aligned}{\left\lvert{C_n}\right\rvert}^2 &={\langle {n} \rvert} a^\dagger a {\lvert {n} \rangle} \\ &={\langle {n} \rvert} n {\lvert {n} \rangle} \\ &=n \left\langle{{n}} \vert {{n}}\right\rangle \\ &=n \end{aligned}

so that, taking the phase real and positive,

\begin{aligned}C_n = \sqrt{n}\end{aligned} \hspace{\stretch{1}}(9.197)

Similarly

\begin{aligned}({\langle {n} \rvert} a) (a^\dagger {\lvert {n} \rangle}) = B_n B_n^{*} \left\langle{{n+1}} \vert {{n+1}}\right\rangle = {\left\lvert{B_n}\right\rvert}^2\end{aligned} \hspace{\stretch{1}}(9.198)

and

\begin{aligned}{\left\lvert{B_n}\right\rvert}^2 &={\langle {n} \rvert} \underbrace{a a^\dagger}_{a a^\dagger - a^\dagger a = \mathbf{1}} {\lvert {n} \rangle} \\ &={\langle {n} \rvert} \left( \mathbf{1} + a^\dagger a \right) {\lvert {n} \rangle} \\ &=(1 + n) \left\langle{{n}} \vert {{n}}\right\rangle \\ &=1 + n \end{aligned}

so that, with the same phase convention,

\begin{aligned}B_n = \sqrt{n + 1}\end{aligned} \hspace{\stretch{1}}(9.199)
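These coefficients can be checked directly against the matrix representation of $a$ in the number basis. A quick numpy sketch of my own, with the real positive phase convention above:

```python
import numpy as np

N = 10
a = np.diag(np.sqrt(np.arange(1, N)), 1)   # a|n> = sqrt(n)|n-1>
ad = a.conj().T

def ket(n):
    # basis column vector |n>
    e = np.zeros(N)
    e[n] = 1.0
    return e

for n in range(1, N - 1):
    assert np.allclose(a @ ket(n), np.sqrt(n) * ket(n - 1))       # C_n = sqrt(n)
    assert np.allclose(ad @ ket(n), np.sqrt(n + 1) * ket(n + 1))  # B_n = sqrt(n+1)
print("ladder coefficients verified")
```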

# Heisenberg picture.

\paragraph{How does the lowering operator $a$ evolve in time?}

\paragraph{A:} Recall that for a general operator $A$, we have for the time evolution of that operator

\begin{aligned}i \hbar \frac{d A}{dt} = \left[{ A },{H}\right]\end{aligned} \hspace{\stretch{1}}(10.200)

Let’s solve this one.

\begin{aligned}i \hbar \frac{d a}{dt} &= \left[{ a },{H}\right] \\ &= \left[{ a },{ \hbar \omega (a^\dagger a + \mathbf{1}/2) }\right] \\ &= \hbar\omega \left[{ a },{ (a^\dagger a + \mathbf{1}/2) }\right] \\ &= \hbar\omega \left[{ a },{ a^\dagger a }\right] \\ &= \hbar\omega \left( a a^\dagger a - a^\dagger a a \right) \\ &= \hbar\omega \left( (a a^\dagger) a - a^\dagger a a \right) \\ &= \hbar\omega \left( (a^\dagger a + \mathbf{1}) a - a^\dagger a a \right) \\ &= \hbar\omega a \end{aligned}

Even though $a$ is an operator rather than a scalar function, it undergoes a time evolution in the Heisenberg picture, and we can solve for $a$ in the differential equation

\begin{aligned}\frac{d a}{dt} = -i \omega a \end{aligned} \hspace{\stretch{1}}(10.201)

This has the solution

\begin{aligned}a = a(0) e^{-i \omega t}\end{aligned} \hspace{\stretch{1}}(10.202)

here $a(0)$ is an operator, the value of that operator at $t = 0$. The exponential here is just a scalar, not affected by the operator, so we can put it on either side of the operator as desired.

\paragraph{CHECK:}

\begin{aligned}a' = a(0) \frac{d}{dt} e^{-i \omega t} = a(0) (-i \omega) e^{-i \omega t} = -i \omega a\end{aligned} \hspace{\stretch{1}}(10.203)
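This evolution can also be confirmed numerically by conjugating a truncated matrix representation of $a$ with the exact propagator (my own sketch, assuming $\hbar = \omega = 1$; because $a$ has a single off-diagonal and $H$ is diagonal, the truncated check is exact):

```python
import numpy as np
from scipy.linalg import expm

hbar = omega = 1.0
N = 12
a = np.diag(np.sqrt(np.arange(1, N)), 1)
H = hbar * omega * (a.conj().T @ a + 0.5 * np.eye(N))

t = 0.7
U = expm(-1j * H * t / hbar)           # Schrodinger propagator
a_t = U.conj().T @ a @ U               # Heisenberg a(t) = U^dagger a U
print(np.allclose(a_t, a * np.exp(-1j * omega * t)))   # True
```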

# A couple of comments on the Schrödinger picture.

We did not do this in class, but the approach is very similar to that used for the hydrogen atom. See the text for full details.

In the Schrödinger picture,

\begin{aligned}-\frac{\hbar^2}{2m} \frac{d^2 u}{dx^2} + \frac{1}{{2}} m \omega^2 x^2 u = E u\end{aligned} \hspace{\stretch{1}}(11.204)

This is stated directly in the wave function representation, but we can relate it to the ket form by the identification $u = u(x) = \left\langle{{x}} \vert {{u}}\right\rangle$.

In 11.204, we can switch to dimensionless quantities with

\begin{aligned}\xi = \alpha x\end{aligned} \hspace{\stretch{1}}(11.205)

with

\begin{aligned}\alpha = \sqrt{\frac{m \omega}{\hbar}}\end{aligned} \hspace{\stretch{1}}(11.206)

This gives, with $\lambda = 2E/\hbar\omega$,

\begin{aligned}\frac{d^2 u}{d\xi^2} + (\lambda - \xi^2) u = 0\end{aligned} \hspace{\stretch{1}}(11.207)

We can use polynomial series expansion methods to solve this, and find that we require a terminating expression, and write this in terms of the Hermite polynomials (courtesy of the clever French once again).

When all is said and done we will get the energy eigenvalues once again

\begin{aligned}E = E_n = \hbar \omega \left( n + \frac{1}{{2}} \right)\end{aligned} \hspace{\stretch{1}}(11.208)

# Back to the Heisenberg picture.

Let us express

\begin{aligned}\left\langle{{x}} \vert {{n}}\right\rangle = u_n(x)\end{aligned} \hspace{\stretch{1}}(12.209)

With

\begin{aligned}a {\lvert {0} \rangle} = 0,\end{aligned} \hspace{\stretch{1}}(12.210)

we have

\begin{aligned}0 =\left( X + i \frac{P}{m \omega} \right) {\lvert {0} \rangle},\end{aligned} \hspace{\stretch{1}}(12.211)

and

\begin{aligned}0 &= {\langle {x} \rvert} \left( X + i \frac{P}{m \omega} \right) {\lvert {0} \rangle} \\ &= {\langle {x} \rvert} X {\lvert {0 } \rangle} + i \frac{1}{m \omega} {\langle {x} \rvert} P {\lvert {0} \rangle} \\ &= x \left\langle{{x}} \vert {{0}}\right\rangle + i \frac{1}{m \omega} {\langle {x} \rvert} P {\lvert {0} \rangle} \\ \end{aligned}

Recall that the matrix element of the momentum operator in the position representation is

\begin{aligned}{\langle {x'} \rvert} P {\lvert {x} \rangle} = \delta(x - x') \left( -i \hbar \frac{d}{dx} \right)\end{aligned} \hspace{\stretch{1}}(12.212)

\begin{aligned}{\langle {x} \rvert} P {\lvert {0} \rangle} &={\langle {x} \rvert} P \underbrace{\int {\lvert {x'} \rangle} {\langle {x'} \rvert} dx' }_{= \mathbf{1}}{\lvert {0} \rangle} \\ &=\int {\langle {x} \rvert} P {\lvert {x'} \rangle} \left\langle{{x'}} \vert {{0}}\right\rangle dx' \\ &=\int \delta(x - x') \left( -i \hbar \frac{d}{dx} \right)\left\langle{{x'}} \vert {{0}}\right\rangle dx' \\ &=\left( -i \hbar \frac{d}{dx} \right)\left\langle{{x}} \vert {{0}}\right\rangle\end{aligned}

We have then

\begin{aligned}0 =x u_0(x) + \frac{\hbar}{m \omega} \frac{d u_0(x)}{dx}\end{aligned} \hspace{\stretch{1}}(12.213)

NOTE: a picture of the solution to this linear differential equation was on a slide, but I did not look closely enough to copy it down. The solution is the Gaussian $u_0(x) \propto e^{-m \omega x^2/2\hbar}$.
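As a quick numeric check of my own, the Gaussian $u_0(x) = e^{-m \omega x^2/2\hbar}$ does solve 12.213 (analytic derivatives are used, so the residual vanishes identically):

```python
import numpy as np

hbar = m = omega = 1.0
x = np.linspace(-4.0, 4.0, 401)
u0 = np.exp(-m * omega * x**2 / (2 * hbar))  # candidate ground state
du0 = -(m * omega / hbar) * x * u0           # analytic derivative of u0

residual = x * u0 + (hbar / (m * omega)) * du0   # RHS of 12.213
print(np.max(np.abs(residual)))   # 0.0: the Gaussian solves the equation
```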

## Notes and problems for Desai chapter III.

Posted by peeterjoot on October 9, 2010

# Notes.

Chapter III notes and problems for [1].

FIXME:
Some puzzling stuff in the interaction section and superposition of time-dependent states sections. Work through those here.

# Problems

## Problem 1. Virial Theorem.

### Statement.

With the assumption that $\left\langle{{\mathbf{r} \cdot \mathbf{p}}}\right\rangle$ is independent of time, and

\begin{aligned}H = \frac{\mathbf{p}^2}{2m} + V(\mathbf{r}) = T + V\end{aligned} \hspace{\stretch{1}}(2.1)

show that

\begin{aligned}2 \left\langle{{T}}\right\rangle = \left\langle{{ \mathbf{r} \cdot \boldsymbol{\nabla} V}}\right\rangle.\end{aligned} \hspace{\stretch{1}}(2.2)

### Solution.

I floundered with this a bit, but found the required hint on physicsforums. We can start with the Heisenberg picture time evolution relation

\begin{aligned}i\hbar \frac{d A_H}{dt} = \left[{A_H},{H}\right]\end{aligned} \hspace{\stretch{1}}(2.3)

So, with the assumption that $\left\langle{{\mathbf{r} \cdot \mathbf{p}}}\right\rangle$ is independent of time, and the use of a stationary state ${\lvert {\psi} \rangle}$ for the expectation calculation we have

\begin{aligned}0 &=\frac{d}{dt} \left\langle{{\mathbf{r} \cdot \mathbf{p}}}\right\rangle \\ &=\frac{d}{dt} {\langle {\psi} \rvert} \mathbf{r} \cdot \mathbf{p} {\lvert {\psi} \rangle} \\ &={\langle {\psi} \rvert} \frac{d}{dt} ( \mathbf{r} \cdot \mathbf{p} ) {\lvert {\psi} \rangle} \\ &= \frac{1}{{i\hbar}} \left\langle{{ \left[{ \mathbf{r} \cdot \mathbf{p} },{H}\right] }}\right\rangle \\ &= -\left\langle{{ \left[{ \mathbf{r} \cdot \boldsymbol{\nabla} },{\frac{\mathbf{p}^2}{2m}}\right] }}\right\rangle -\left\langle{{ \left[{ \mathbf{r} \cdot \boldsymbol{\nabla} },{V(\mathbf{r})}\right] }}\right\rangle.\end{aligned}

The exercise now becomes one of evaluating the remaining commutators. For the Laplacian commutator we have

\begin{aligned}\left[{ \mathbf{r} \cdot \boldsymbol{\nabla} },{\boldsymbol{\nabla}^2}\right] \psi&=x_m \partial_m \partial_n \partial_n \psi - \partial_n \partial_n x_m \partial_m \psi \\ &=x_m \partial_m \partial_n \partial_n \psi - \partial_n \partial_n \psi - \partial_n x_m \partial_n \partial_m \psi \\ &=x_m \partial_m \partial_n \partial_n \psi - \partial_n \partial_n \psi - \partial_n \partial_n \psi - x_m \partial_n \partial_n \partial_m \psi \\ &=- 2 \boldsymbol{\nabla}^2 \psi\end{aligned}

For the potential commutator we have

\begin{aligned}\left[{ \mathbf{r} \cdot \boldsymbol{\nabla} },{V(\mathbf{r})}\right] \psi&=x_m \partial_m V \psi -V x_m \partial_m \psi \\ &=x_m (\partial_m V) \psi + x_m V \partial_m \psi -V x_m \partial_m \psi \\ &=\Bigl( \mathbf{r} \cdot (\boldsymbol{\nabla} V) \Bigr) \psi\end{aligned}

Putting all the $\hbar$ factors back in, we get

\begin{aligned}2 \left\langle{{ \frac{\mathbf{p}^2}{2m} }}\right\rangle = \left\langle{{ \mathbf{r} \cdot (\boldsymbol{\nabla} V) }}\right\rangle,\end{aligned} \hspace{\stretch{1}}(2.4)

which is the desired result.
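Before moving on, the theorem is easy to test numerically for a potential where everything is known. A sketch of mine (not part of the problem set) for the harmonic oscillator ground state with $\hbar = m = \omega = 1$, where $\left\langle{T}\right\rangle = 1/4$ and $\left\langle{x V'}\right\rangle = \left\langle{x^2}\right\rangle = 1/2$:

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 2)                   # SHO ground state, hbar = m = omega = 1
psi /= np.sqrt(np.sum(psi**2) * dx)       # normalize on the grid

dpsi = np.gradient(psi, x)
T = 0.5 * np.sum(dpsi**2) * dx            # <T> = (1/2) integral |psi'|^2 dx
xVp = np.sum(psi**2 * x**2) * dx          # <x V'> with V = x^2/2, so V' = x

print(2 * T, xVp)   # both approximately 0.5
```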

Followup: why assume $\left\langle{{\mathbf{r} \cdot \mathbf{p}}}\right\rangle$ is independent of time?

## Problem 2. Application of virial theorem.

Calculate $\left\langle{{T}}\right\rangle$ with $V = \lambda \ln(r/a)$.

\begin{aligned}\mathbf{r} \cdot \boldsymbol{\nabla} V &= r \hat{\mathbf{r}} \cdot \hat{\mathbf{r}} \lambda \frac{\partial {\ln(r/a)}}{\partial {r}} \\ &= \lambda r \frac{1}{{a}} \frac{a}{r} \\ &= \lambda \\ \implies \\ \left\langle{{T}}\right\rangle &= \lambda/2\end{aligned}

## Problem 3. Heisenberg Position operator representation.

### Part I.

Express $x$ as an operator $x_H$ for $H = \mathbf{p}^2/2m$.

With

\begin{aligned}{\langle {\psi} \rvert} x {\lvert {\psi} \rangle} = {\langle {\psi_0} \rvert} U^\dagger x U {\lvert {\psi_0} \rangle}\end{aligned}

We want to expand

\begin{aligned}x_H &= U^\dagger x U \\ &= e^{i H t/\hbar} x e^{-iH t/\hbar} \\ &= \sum_{k,l = 0}^\infty \frac{1}{{k!}} \frac{1}{{l!}} \left(\frac{i H t}{\hbar}\right)^k x \left(\frac{-i H t}{\hbar}\right)^l .\end{aligned}

We need to evaluate $H^k x H^l$ to proceed. Using $p^n x = -i \hbar n p^{n-1} + x p^n$, we have

\begin{aligned}H^k x &= \frac{1}{{(2m)^k}} p^{2k} x \\ &= \frac{1}{{(2m)^k}} \left( -i \hbar (2k) p^{2k -1} + x p^{2k} \right) \\ &= x H^k + \frac{1}{{2m}} (-i \hbar) (2k) p \frac{p^{2(k-1)}}{(2m)^{k-1}} \\ &= x H^k - \frac{i \hbar k}{m} p H^{k-1}.\end{aligned}

This gives us

\begin{aligned}x_H &= x - \frac{i \hbar p }{m} \sum_{k,l=0}^\infty \frac{k}{k!} \frac{1}{{l!}}\left(\frac{i t}{\hbar}\right)^k H^{k-1 + l}\left(\frac{-i t}{\hbar}\right)^l \\ &= x - \frac{i \hbar p }{m} \frac{i t}{\hbar}\sum_{k=1}^\infty \frac{1}{(k-1)!} \left(\frac{i t H}{\hbar}\right)^{k-1}\sum_{l=0}^\infty \frac{1}{{l!}}\left(\frac{-i t H}{\hbar}\right)^l \\ &= x - \frac{i \hbar p }{m} \frac{i t}{\hbar} e^{i t H/\hbar} e^{-i t H/\hbar} \\ &= x - \frac{i \hbar p i t }{m \hbar} \end{aligned}

Or

\begin{aligned}x_H &= x + \frac{p t }{m} \end{aligned} \hspace{\stretch{1}}(2.5)

### Part II.

Express $x$ as an operator $x_H$ for $H = \mathbf{p}^2/2m + V$ with $V = \lambda x^n$.

In retrospect, for the first part of this problem, it would have been better to use the series expansion for an exponential sandwich of this form. In explicit form, that expansion is

\begin{aligned}e^A B e^{-A}&=B + \frac{1}{{1!}} \left[{A},{B}\right]+ \frac{1}{{2!}} \left[{A},{\left[{A},{B}\right]}\right]+ \cdots\end{aligned} \hspace{\stretch{1}}(2.6)

Doing so, we’d find for the first commutator

\begin{aligned}\frac{i t}{2m \hbar} \left[{\mathbf{p}^2},{x}\right] = \frac{t p}{m},\end{aligned} \hspace{\stretch{1}}(2.7)

so that the series has only the first two terms, and we’d obtain the same result. That seems like a logical approach to try here too. For the first commutator, we get the same $tp/m$ result since $\left[{V},{x}\right] = 0$.
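As a sanity check of 2.6 itself, here is a small numerical verification of my own construction with random matrices; the nested-commutator series converges quickly when $A$ is small:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)) * 0.1   # small, so the series converges fast
B = rng.standard_normal((5, 5))

def comm(X, Y):
    return X @ Y - Y @ X

lhs = expm(A) @ B @ expm(-A)
rhs, term = B.copy(), B.copy()
for k in range(1, 15):                   # partial sums of the expansion 2.6
    term = comm(A, term) / k             # [A, [A, ... [A, B]]] / k!
    rhs += term

print(np.allclose(lhs, rhs))   # True
```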

Employing

\begin{aligned}x^n p = i \hbar n x^{n-1} + p x^n,\end{aligned} \hspace{\stretch{1}}(2.8)

I find

\begin{aligned}\left( \frac{i t}{\hbar} \right)^2 \left[{H},{\left[{H},{x}\right]}\right] &= \frac{i \lambda t^2}{\hbar m } \left[{x^n},{p}\right] \\ &= - \frac{n t^2 \lambda}{m} x^{n-1} \\ &= - \frac{n t^2 V}{m x} \\ \end{aligned}

The triple commutator gets no prettier, and I get

\begin{aligned}\left( \frac{i t}{\hbar} \right)^3 \left[{H},{ \left[{H},{ \left[H, x\right] }\right] }\right]&= \frac{it}{\hbar} \left[{ \frac{\mathbf{p}^2}{2m} + \lambda x^n},{ - \frac{n t^2 V}{m x} }\right] \\ &= -\frac{it}{\hbar} \frac{n t^2 }{m } \frac{\lambda}{2m} \left[{\mathbf{p}^2},{ x^{n-1}}\right] \\ &= \cdots \\ &= \frac{n(n-1)t^3 V}{ 2 m^2 x^3 } (i \hbar n + 2 p x).\end{aligned}

Putting all the pieces together this gives

\begin{aligned}x_H =e^{iH t/\hbar} x e^{-iH t/\hbar} &= x + \frac{tp}{m} - \frac{n t^2 V}{ 2 m x} + \frac{n(n-1)t^3 V}{ 12 m^2 x^3 } (i \hbar n + 2 p x) + \cdots\end{aligned} \hspace{\stretch{1}}(2.9)

If there is a closed form for this it is not obvious to me. Would a fixed lower degree potential function shed any more light on this? How about the harmonic oscillator Hamiltonian

\begin{aligned}H = \frac{p^2}{2m} + \frac{m \omega^2 }{2} x^2\end{aligned} \hspace{\stretch{1}}(2.10)

… this one works out nicely since there’s an even-odd alternation.

We get

\begin{aligned}x_H = x \cos (\omega t) + \frac{ p t }{m} \frac{\sin( \omega t)}{ \omega t } = x \cos(\omega t) + \frac{p}{m \omega} \sin(\omega t)\end{aligned} \hspace{\stretch{1}}(2.11)

I’d not expect such a tidy result for an arbitrary $V(x) = \lambda x^n$ potential.
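For the oscillator, the even series sums to $\cos \omega t$ and the odd series to $\sin \omega t$, so the closed form should be $x_H = x\cos\omega t + (p/m\omega)\sin\omega t$. Here is my own numerical check, conjugating a truncated matrix $X$ with the exact propagator, with $\hbar = m = \omega = 1$ (exact even after truncation, since $X$ is tridiagonal and $H$ diagonal):

```python
import numpy as np
from scipy.linalg import expm

hbar = m = omega = 1.0
N = 20
a = np.diag(np.sqrt(np.arange(1, N)), 1)
ad = a.conj().T
X = np.sqrt(hbar / (2 * m * omega)) * (ad + a)
P = 1j * np.sqrt(hbar * m * omega / 2) * (ad - a)
H = hbar * omega * (ad @ a + 0.5 * np.eye(N))

t = 1.3
U = expm(-1j * H * t / hbar)
X_H = U.conj().T @ X @ U                                  # Heisenberg x(t)
closed = X * np.cos(omega * t) + (P / (m * omega)) * np.sin(omega * t)
print(np.allclose(X_H, closed))   # True
```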

## Problem 4. Feynman-Hellman relation.

For a continuously parametrized eigenstate, eigenvalue, and Hamiltonian, ${\lvert {\psi(\lambda)} \rangle}$, $E(\lambda)$, and $H(\lambda)$ respectively, we can relate the derivatives

\begin{aligned}\frac{\partial {}}{\partial {\lambda}} ( H {\lvert {\psi} \rangle} ) &= \frac{\partial {}}{\partial {\lambda}} ( E {\lvert {\psi} \rangle} ) \\ \implies \\ \frac{\partial {H}}{\partial {\lambda}} {\lvert {\psi} \rangle} +H \frac{\partial {{\lvert {\psi} \rangle}}}{\partial {\lambda}} &= \frac{\partial {E}}{\partial {\lambda}} {\lvert {\psi} \rangle} + E \frac{\partial {{\lvert {\psi} \rangle} }}{\partial {\lambda}} \end{aligned}

Left multiplication by ${\langle {\psi} \rvert}$ gives

\begin{aligned}{\langle {\psi} \rvert}\frac{\partial {H}}{\partial {\lambda}} {\lvert {\psi} \rangle} +{\langle {\psi} \rvert}H \frac{\partial {{\lvert {\psi} \rangle}}}{\partial {\lambda}} &= {\langle {\psi} \rvert}\frac{\partial {E}}{\partial {\lambda}} {\lvert {\psi} \rangle} + E {\langle {\psi} \rvert}\frac{\partial {{\lvert {\psi} \rangle} }}{\partial {\lambda}} \\ \implies \\ {\langle {\psi} \rvert}\frac{\partial {H}}{\partial {\lambda}} {\lvert {\psi} \rangle} +({\langle {\psi} \rvert}E) \frac{\partial {{\lvert {\psi} \rangle}}}{\partial {\lambda}} &= {\langle {\psi} \rvert}\frac{\partial {E}}{\partial {\lambda}} {\lvert {\psi} \rangle} + E {\langle {\psi} \rvert}\frac{\partial {{\lvert {\psi} \rangle} }}{\partial {\lambda}} \\ \implies \\ {\langle {\psi} \rvert}\frac{\partial {H}}{\partial {\lambda}} {\lvert {\psi} \rangle} &= \frac{\partial {E}}{\partial {\lambda}} \left\langle{{\psi}} \vert {{\psi}}\right\rangle,\end{aligned}

which provides the desired identity

\begin{aligned}\frac{\partial {E}}{\partial {\lambda}} = {\langle {\psi(\lambda)} \rvert}\frac{\partial {H}}{\partial {\lambda}} {\lvert {\psi(\lambda)} \rangle}\end{aligned} \hspace{\stretch{1}}(2.12)

## Problem 5.

### Description.

With eigenstates ${\lvert {\phi_1} \rangle}$ and ${\lvert {\phi_2} \rangle}$, of $H$ with eigenvalues $E_1$ and $E_2$, respectively, and

\begin{aligned}{\lvert {\chi_1} \rangle} &= \frac{1}{{\sqrt{2}}}( {\lvert {\phi_1} \rangle} +{\lvert {\phi_2} \rangle}) \\ {\lvert {\chi_2} \rangle} &= \frac{1}{{\sqrt{2}}}( {\lvert {\phi_1} \rangle} -{\lvert {\phi_2} \rangle})\end{aligned}

and ${\lvert {\psi(0)} \rangle} = {\lvert {\chi_1} \rangle}$, determine ${\lvert {\psi(t)} \rangle}$ in terms of ${\lvert {\phi_1} \rangle}$ and ${\lvert {\phi_2} \rangle}$.

### Solution.

\begin{aligned}{\lvert {\psi(t)} \rangle}&= e^{-i H t /\hbar} {\lvert {\psi(0)} \rangle} \\ &= e^{-i H t /\hbar} {\lvert {\chi_1} \rangle} \\ &= \frac{1}{{\sqrt{2}}} e^{-i H t /\hbar} ( {\lvert {\phi_1} \rangle} +{\lvert {\phi_2} \rangle}) \\ &= \frac{1}{{\sqrt{2}}} (e^{-i E_1 t /\hbar} {\lvert {\phi_1} \rangle} +e^{-i E_2 t /\hbar} {\lvert {\phi_2} \rangle} )\qquad\square\end{aligned}
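The state therefore sloshes between $\chi_1$ and $\chi_2$ at the Bohr frequency $(E_2 - E_1)/\hbar$, with $\left\lvert \left\langle \chi_1 \vert \psi(t) \right\rangle \right\rvert^2 = \cos^2\left( (E_2 - E_1) t / 2\hbar \right)$. A small sketch of mine, with assumed sample energies $E_1 = 1$, $E_2 = 2.5$:

```python
import numpy as np

hbar = 1.0
E1, E2 = 1.0, 2.5          # assumed sample eigenvalues, for illustration only
t = np.linspace(0.0, 10.0, 5)

# coefficients of |psi(t)> in the {|phi_1>, |phi_2>} basis, starting from chi_1
c1 = np.exp(-1j * E1 * t / hbar) / np.sqrt(2)
c2 = np.exp(-1j * E2 * t / hbar) / np.sqrt(2)

# overlap with chi_1 = (phi_1 + phi_2)/sqrt(2): |<chi_1|psi(t)>|^2
p1 = np.abs((c1 + c2) / np.sqrt(2))**2
print(np.allclose(p1, np.cos((E2 - E1) * t / (2 * hbar))**2))   # True
```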

## Problem 6.

### Description.

Consider a Coulomb like potential $-\lambda/r$ with angular momentum $l=0$. If the eigenfunction is

\begin{aligned}u(r) = u_0 e^{-\beta r}\end{aligned} \hspace{\stretch{1}}(2.13)

determine $u_0$, $\beta$, and the energy eigenvalue $E$ in terms of $\lambda$, and $m$.

### Solution.

We can start with the normalization constant $u_0$ by integrating

\begin{aligned}1 &= u_0^2 \int_0^\infty dr e^{-\beta r} e^{-\beta r} \\ &=u_0^2 \left. \frac{e^{-2 \beta r}}{-2 \beta} \right\vert_0^\infty \\ &= u_0^2 \frac{1}{{2\beta}} \\ \end{aligned}

\begin{aligned}u_0 &= \sqrt{2\beta}\end{aligned} \hspace{\stretch{1}}(2.14)

To go further, we need the Hamiltonian. Note that we can write the Laplacian with the angular momentum operator factored out using

\begin{aligned}\boldsymbol{\nabla}^2 &= \frac{1}{{\mathbf{x}^2}} \left( (\mathbf{x} \cdot \boldsymbol{\nabla})^2 + \mathbf{x} \cdot \boldsymbol{\nabla} + (\mathbf{x} \times \boldsymbol{\nabla})^2 \right)\end{aligned} \hspace{\stretch{1}}(2.15)

With zero for the angular momentum operator $\mathbf{x} \times \boldsymbol{\nabla}$, and switching to spherical coordinates, we have

\begin{aligned}\boldsymbol{\nabla}^2 &= \frac{1}{{r}} \partial_r + \frac{1}{{r}} \partial_r r \partial_r \\ &= \frac{1}{{r}} \partial_r + \frac{1}{{r}} \partial_r+ \frac{1}{{r}} r \partial_{rr} \\ &= \frac{2}{r} \partial_r + \partial_{rr} \\ \end{aligned}

We can now write the Hamiltonian for the zero angular momentum case

\begin{aligned}H&= -\frac{\hbar^2}{2m} \left( \frac{2}{r} \partial_r + \partial_{rr} \right) - \frac{\lambda}{r}\end{aligned} \hspace{\stretch{1}}(2.16)

With application of this Hamiltonian to the eigenfunction we have

\begin{aligned}E u_0 e^{-\beta r} &=\left( -\frac{\hbar^2}{2m} \left( \frac{2}{r} \partial_r + \partial_{rr} \right) - \frac{\lambda}{r} \right) u_0 e^{-\beta r} \\ &=\left( -\frac{\hbar^2}{2m} \left( \frac{2}{r} (-\beta) + \beta^2 \right) - \frac{\lambda}{r} \right) u_0 e^{-\beta r} .\end{aligned}

In particular, in the limit $r \rightarrow \infty$, the $1/r$ terms vanish and we have

\begin{aligned}-\frac{\hbar^2 \beta^2 }{2m} &= E\end{aligned} \hspace{\stretch{1}}(2.17)

\begin{aligned}-\frac{\hbar^2 \beta^2 }{2m} &= \left( -\frac{\hbar^2}{2m} \left( \frac{2}{r} (-\beta) + \beta^2 \right) - \frac{\lambda}{r} \right) \\ \implies \\ \frac{\hbar^2}{2m} \frac{2}{r} \beta &= \frac{\lambda}{r} \end{aligned}

Collecting all the results we have

\begin{aligned}\beta &= \frac{\lambda m}{\hbar^2} \\ E &= -\frac{\lambda^2 m}{2 \hbar^2} \\ u_0 &= \frac{\sqrt{2 \lambda m}}{\hbar}\end{aligned} \hspace{\stretch{1}}(2.18)
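These values are easy to verify pointwise: with $\hbar = 1$, applying the Hamiltonian 2.16 to $e^{-\beta r}$ on a grid should reproduce $E \psi$ everywhere. A sketch of my own, using analytic derivatives to avoid differencing error:

```python
import numpy as np

# Check that psi = exp(-beta r) satisfies
# (-(1/2m)(2/r d/dr + d^2/dr^2) - lam/r) psi = E psi, with hbar = 1.
m, lam = 1.0, 1.0
beta = lam * m               # from 2.18
E = -lam**2 * m / 2          # from 2.18

r = np.linspace(0.1, 20.0, 500)
psi = np.exp(-beta * r)
dpsi = -beta * psi           # analytic first derivative
d2psi = beta**2 * psi        # analytic second derivative

lhs = -(1 / (2 * m)) * (2 / r * dpsi + d2psi) - lam / r * psi
print(np.max(np.abs(lhs - E * psi)))   # ~0 (machine precision)
```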

## Problem 7.

### Description.

A particle in a uniform field $\mathbf{E}_0$. Show that the expectation value of the position operator $\left\langle{\mathbf{r}}\right\rangle$ satisfies

\begin{aligned}m \frac{d^2 \left\langle{\mathbf{r}}\right\rangle }{dt^2} = e \mathbf{E}_0.\end{aligned} \hspace{\stretch{1}}(2.21)

### Solution.

This follows from Ehrenfest’s theorem once we formulate the force $e \mathbf{E}_0 = -\boldsymbol{\nabla} \phi$ in terms of a potential $\phi$. That potential is

\begin{aligned}\phi = - e \mathbf{E}_0 \cdot (x,y,z)\end{aligned} \hspace{\stretch{1}}(2.22)

The Hamiltonian is therefore

\begin{aligned}H = \frac{\mathbf{p}^2}{2m} - e \mathbf{E}_0 \cdot (x,y,z).\end{aligned} \hspace{\stretch{1}}(2.23)

Ehrenfest’s theorem gives us

\begin{aligned}\frac{d}{dt} \left\langle{{x_k}}\right\rangle &= \frac{1}{{m}} \left\langle{{p_k}}\right\rangle \\ \frac{d}{dt} \left\langle{{p_k}}\right\rangle &= -\left\langle{{ \frac{\partial {V}}{\partial {x_k}} }}\right\rangle,\end{aligned}

or

\begin{aligned}\frac{d^2}{dt^2} \left\langle{{x_k}}\right\rangle &= -\frac{1}{{m}} \left\langle{{ \frac{\partial {V}}{\partial {x_k}} }}\right\rangle.\end{aligned} \hspace{\stretch{1}}(2.24)

\begin{aligned}\frac{\partial {V}}{\partial {x_k}} &= - e (\mathbf{E}_0)_k\end{aligned}

Putting all the last bits together, and summing over the directions $\mathbf{e}_k$ we have

\begin{aligned}m \frac{d^2}{dt^2} \mathbf{e}_k \left\langle{{x_k}}\right\rangle = \mathbf{e}_k \left\langle{{ e (\mathbf{E}_0)_k }}\right\rangle= e \mathbf{E}_0\qquad\square\end{aligned}

## Problem 8.

### Description.

For Hamiltonian eigenstates ${\lvert {E_n} \rangle}$, $C = AB$, $A = \left[{B},{H}\right]$, obtain the matrix element ${\langle {E_m} \rvert} C {\lvert {E_n} \rangle}$ in terms of the matrix element of $A$.

### Solution.

I was able to get most of what was asked for here, with a small exception. I started with the matrix element for $A$, which is

\begin{aligned}{\langle {E_m} \rvert} A {\lvert {E_n} \rangle}={\langle {E_m} \rvert} BH - HB {\lvert {E_n} \rangle} =(E_n - E_m){\langle {E_m} \rvert} B {\lvert {E_n} \rangle} \end{aligned} \hspace{\stretch{1}}(2.25)

Next, computing the matrix element for $C$ we have

\begin{aligned}{\langle {E_m} \rvert} C {\lvert {E_n} \rangle}&={\langle {E_m} \rvert} BHB - HB^2 {\lvert {E_n} \rangle} \\ &=\sum_a {\langle {E_m} \rvert} BH {\lvert {E_a} \rangle}{\langle {E_a} \rvert} B {\lvert {E_n} \rangle} - E_m {\langle {E_m} \rvert} B {\lvert {E_a} \rangle} {\langle {E_a} \rvert} B {\lvert {E_n} \rangle} \\ &=\sum_a E_a {\langle {E_m} \rvert} B {\lvert {E_a} \rangle}{\langle {E_a} \rvert} B {\lvert {E_n} \rangle} -E_m {\langle {E_m} \rvert} B {\lvert {E_a} \rangle} {\langle {E_a} \rvert} B {\lvert {E_n} \rangle} \\ &=\sum_a (E_a - E_m){\langle {E_m} \rvert} B {\lvert {E_a} \rangle}{\langle {E_a} \rvert} B {\lvert {E_n} \rangle} \\ &=\sum_a {\langle {E_m} \rvert} A {\lvert {E_a} \rangle} {\langle {E_a} \rvert} B {\lvert {E_n} \rangle} \\ &={\langle {E_m} \rvert} A {\lvert {E_n} \rangle} {\langle {E_n} \rvert} B {\lvert {E_n} \rangle} +\sum_{a \ne n} {\langle {E_m} \rvert} A {\lvert {E_a} \rangle} {\langle {E_a} \rvert} B {\lvert {E_n} \rangle} \\ &={\langle {E_m} \rvert} A {\lvert {E_n} \rangle} {\langle {E_n} \rvert} B {\lvert {E_n} \rangle} +\sum_{a \ne n} {\langle {E_m} \rvert} A {\lvert {E_a} \rangle} \frac{{\langle {E_a} \rvert} A {\lvert {E_n} \rangle}}{E_n - E_a}\end{aligned}

Except for the ${\langle {E_n} \rvert} B {\lvert {E_n} \rangle}$ part of this expression, the problem as stated is complete. The relationship 2.25 is no help with $n = m$, so I see no choice but to leave that small part of the expansion in terms of $B$.

## Problem 9.

### Description.

Operator $A$ has eigenstates ${\lvert {a_i} \rangle}$, with a unitary change of basis operation $U {\lvert {a_i} \rangle} = {\lvert {b_i} \rangle}$. Determine in terms of $U$, and $A$ the operator $B$ and its eigenvalues for which ${\lvert {b_i} \rangle}$ are eigenstates.

### Solution.

Consider for motivation the matrix element of $A$ in terms of ${\lvert {b_i} \rangle}$. We will also let $A {\lvert {a_i} \rangle} = \alpha_i {\lvert {a_i} \rangle}$. We then have

\begin{aligned}{\langle {a_i} \rvert} A {\lvert {a_j} \rangle}&={\langle {b_i} \rvert} U A U^\dagger {\lvert {b_j} \rangle} \\ \end{aligned}

We also have

\begin{aligned}{\langle {a_i} \rvert} A {\lvert {a_j} \rangle}&=\alpha_j \left\langle{{a_i}} \vert {{a_j}}\right\rangle \\ &=\alpha_j \delta_{ij}\end{aligned}

So it appears that the operator $U A U^\dagger$ has the same diagonal matrix elements. Let us see how it behaves in terms of action on the basis $\{{\lvert {b_i} \rangle}\}$. We have

\begin{aligned}U A U^\dagger {\lvert {b_i} \rangle}&= U A {\lvert {a_i} \rangle} \\ &= U \alpha_i {\lvert {a_i} \rangle} \\ &= \alpha_i {\lvert {b_i} \rangle} \\ \end{aligned}

So we see that $B = U A U^\dagger$ has the states ${\lvert {b_i} \rangle}$ as eigenstates, with the same eigenvalues $\alpha_i$ as $A$.
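A concrete numpy illustration of my own, with an assumed sample spectrum and a random orthogonal $U$: $B = U A U^\dagger$ maps each ${\lvert {b_i} \rangle} = U {\lvert {a_i} \rangle}$ to $\alpha_i {\lvert {b_i} \rangle}$, and the spectrum is unchanged.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = np.array([1.0, 2.0, 5.0])        # assumed eigenvalues of A
A = np.diag(alpha)                       # the |a_i> are the standard basis vectors
U, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # a random orthogonal U

B = U @ A @ U.conj().T                   # B = U A U^dagger
for i in range(3):
    b_i = U[:, i]                        # |b_i> = U|a_i>
    assert np.allclose(B @ b_i, alpha[i] * b_i)
print(np.sort(np.linalg.eigvalsh(B)))    # [1. 2. 5.]: same eigenvalues as A
```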

## Problem 10.

### Description.

With $H {\lvert {n} \rangle} = E_n {\lvert {n} \rangle}$, $A = \left[{H},{F}\right]$ and ${\langle {0} \rvert} F {\lvert {0} \rangle} = 0$, show that

\begin{aligned}\sum_{n\ne 0} \frac{{\langle {0} \rvert} A {\lvert {n} \rangle} {\langle {n} \rvert} A {\lvert {0} \rangle} }{E_n - E_0} = {\langle {0} \rvert} AF {\lvert {0} \rangle}\end{aligned} \hspace{\stretch{1}}(2.26)

### Solution.

\begin{aligned}{\langle {0} \rvert} AF {\lvert {0} \rangle}&={\langle {0} \rvert} HF F - FH F{\lvert {0} \rangle} \\ &=\sum_n E_0 {\langle {0} \rvert} F {\lvert {n} \rangle}{\langle {n} \rvert} F {\lvert {0} \rangle} - E_n {\langle {0} \rvert} F {\lvert {n} \rangle} {\langle {n} \rvert} F{\lvert {0} \rangle} \\ &=\sum_n (E_0 -E_n) {\langle {0} \rvert} F {\lvert {n} \rangle}{\langle {n} \rvert} F {\lvert {0} \rangle} \\ &=\sum_{n\ne0} (E_0 -E_n) {\langle {0} \rvert} F {\lvert {n} \rangle}{\langle {n} \rvert} F {\lvert {0} \rangle} \\ \end{aligned}

We also have

\begin{aligned}{\langle {0} \rvert} A {\lvert {n} \rangle} {\langle {n} \rvert} A {\lvert {0} \rangle}&={\langle {0} \rvert} HF -F H {\lvert {n} \rangle} {\langle {n} \rvert} A {\lvert {0} \rangle} \\ &=(E_0 - E_n) {\langle {0} \rvert} F {\lvert {n} \rangle} {\langle {n} \rvert} HF - FH {\lvert {0} \rangle} \\ &=-(E_0 - E_n)^2 {\langle {0} \rvert} F {\lvert {n} \rangle} {\langle {n} \rvert} F {\lvert {0} \rangle} \\ \end{aligned}

Or, for $n \ne 0$,

\begin{aligned}{\langle {0} \rvert} F {\lvert {n} \rangle} {\langle {n} \rvert} F {\lvert {0} \rangle} &=-\frac{{\langle {0} \rvert} A {\lvert {n} \rangle} {\langle {n} \rvert} A {\lvert {0} \rangle}}{(E_0 - E_n)^2 }.\end{aligned}

This gives

\begin{aligned}{\langle {0} \rvert} AF {\lvert {0} \rangle}&=-\sum_{n\ne0} (E_0 -E_n) \frac{{\langle {0} \rvert} A {\lvert {n} \rangle} {\langle {n} \rvert} A {\lvert {0} \rangle}}{(E_0 - E_n)^2 } \\ &=\sum_{n\ne0} \frac{{\langle {0} \rvert} A {\lvert {n} \rangle} {\langle {n} \rvert} A {\lvert {0} \rangle}}{E_n - E_0 } \qquad\square\end{aligned}
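This identity can be spot checked with random Hermitian matrices, working directly in the energy eigenbasis so that $H$ is diagonal (a sketch of my own; the spectrum is drawn nondegenerate so the denominators are safe):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 6
E = np.sort(rng.uniform(0.0, 10.0, N))   # nondegenerate energies, E_0 lowest
H = np.diag(E)                           # work in the energy eigenbasis

M = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
F = (M + M.conj().T) / 2                 # Hermitian F
F[0, 0] = 0.0                            # impose <0|F|0> = 0

A = H @ F - F @ H                        # A = [H, F]
lhs = sum(A[0, n] * A[n, 0] / (E[n] - E[0]) for n in range(1, N))
rhs = (A @ F)[0, 0]
print(np.allclose(lhs, rhs))   # True
```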

## Problem 11. commutator of angular momentum with Hamiltonian.

Show that $\left[{\mathbf{L}},{H}\right] = 0$, where $H = \mathbf{p}^2/2m + V(r)$.

This follows by considering $\left[{\mathbf{L}},{\mathbf{p}^2}\right]$, and $\left[{\mathbf{L}},{V(r)}\right]$. Let

\begin{aligned}L_{jk} = x_j p_k - x_k p_j,\end{aligned} \hspace{\stretch{1}}(2.27)

so that

\begin{aligned}\mathbf{L} = \mathbf{e}_i \epsilon_{ijk} L_{jk}.\end{aligned} \hspace{\stretch{1}}(2.28)

We now need to consider the commutators of the operators $L_{jk}$ with $\mathbf{p}^2$ and $V(r)$.

Let’s start with $p^2$. In particular

\begin{aligned}\mathbf{p}^2 x_m p_n&=p_k p_k x_m p_n \\ &=p_k (p_k x_m) p_n \\ &=p_k (-i\hbar \delta_{km} + x_m p_k) p_n \\ &=-i\hbar p_m p_n + (p_k x_m) p_k p_n \\ &=-i\hbar p_m p_n + (-i \hbar \delta_{km} + x_m p_k ) p_k p_n \\ &=-2 i\hbar p_m p_n + x_m p_n \mathbf{p}^2.\end{aligned}

So our commutator with $\mathbf{p}^2$ is

\begin{aligned}\left[{L_{jk}},{\mathbf{p}^2}\right]&=(x_j p_k - x_k p_j) \mathbf{p}^2 -( -2 i\hbar p_j p_k + x_j p_k \mathbf{p}^2 +2 i\hbar p_k p_j - x_k p_j \mathbf{p}^2 ).\end{aligned}

Since $p_j p_k = p_k p_j$, all terms cancel out, and the problem is reduced to showing that

\begin{aligned}\left[{\mathbf{L}},{H}\right] &= \left[{\mathbf{L}},{V(r)}\right] = 0.\end{aligned}

Now assume that $V(r)$ has a series representation

\begin{aligned}V(r) &= \sum_j a_j r^j = \sum_j a_j (x_k x_k)^{j/2}\end{aligned}

We’d like to consider the action of $x_m p_n$ on this function

\begin{aligned}x_m p_n V(r) \Psi&= -i \hbar x_m \sum_j a_j \partial_n \left( (x_k x_k)^{j/2} \Psi \right) \\ &= -i \hbar x_m \sum_j a_j \left(j x_n (x_k x_k)^{j/2-1} \Psi + r^j \partial_n \Psi\right) \\ &= -\frac{i \hbar x_m x_n}{r^2} \sum_j a_j j r^j \Psi + x_m V(r) p_n \Psi\end{aligned}

\begin{aligned}L_{mn} V(r) &=(x_m p_n - x_n p_m) V(r) \\ &= -\frac{i \hbar x_m x_n}{r^2} \sum_j a_j j r^j+\frac{i \hbar x_n x_m}{r^2} \sum_j a_j j r^j + V(r) (x_m p_n - x_n p_m )\\ &= V(r) L_{mn}\end{aligned}

Thus $\left[{L_{mn}},{V(r)}\right] = 0$ as expected, implying $\left[{\mathbf{L}},{H}\right] = 0$.
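The same cancellation can be checked symbolically with sympy for an arbitrary central potential. Here is my own sketch for $L_z$, written without its overall $-i\hbar$ factor:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
r = sp.sqrt(x**2 + y**2 + z**2)
V = sp.Function('V')(r)                 # arbitrary central potential V(r)
psi = sp.Function('psi')(x, y, z)       # arbitrary test function

# L_z up to the constant -i*hbar factor: x d/dy - y d/dx
Lz = lambda f: x * sp.diff(f, y) - y * sp.diff(f, x)

# [L_z, V(r)] applied to psi: the F'(r) (x y - y x)/r terms cancel identically
commutator = sp.simplify(Lz(V * psi) - V * Lz(psi))
print(commutator)   # 0
```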

# References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.