Peeter Joot's Blog.

Math, physics, perl, and programming obscurity.



PHY456H1F: Quantum Mechanics II. Lecture 6 (Taught by Prof J.E. Sipe). Interaction picture.

Posted by peeterjoot on September 27, 2011

[Click here for a PDF of this post with nicer formatting (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

Disclaimer.

Peeter’s lecture notes from class. May not be entirely coherent.

Interaction picture.

Recap.

Recall our table comparing the Schrödinger and Heisenberg pictures

\begin{aligned}\text{Schr\"{o}dinger picture} &\qquad \text{Heisenberg picture} \\ i \hbar \frac{d}{dt} {\lvert {\psi_s(t)} \rangle} = H {\lvert {\psi_s(t)} \rangle} &\qquad i \hbar \frac{d}{dt} O_H(t) = \left[{O_H},{H}\right] \\ {\langle {\psi_s(t)} \rvert} O_S {\lvert {\psi_s(t)} \rangle} &= {\langle {\psi_H} \rvert} O_H {\lvert {\psi_H} \rangle} \\ {\lvert {\psi_s(0)} \rangle} &= {\lvert {\psi_H} \rangle} \\ O_S &= O_H(0)\end{aligned}

A motivating example.

While fundamental Hamiltonians are independent of time, in a number of common cases, we can form approximate Hamiltonians that are time dependent. One such example is that of Coulomb excitations of an atom, as covered in section 18.3 of the text [1], and shown in figure (\ref{fig:qmTwoL6fig1}).

\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.4\textheight]{qmTwoL6fig1}
\caption{Coulomb interaction of a nucleus and heavy atom.}
\end{figure}

We consider the interaction of a heavy nucleus with a neutral atom, where the nucleus is heavy enough that it can be treated classically. From the atom's point of view, the effects of the heavy nucleus barreling by can be described using a time dependent Hamiltonian. For the atom, that interaction Hamiltonian is

\begin{aligned}H' = \sum_i \frac{ Z e q_i }{{\left\lvert{\mathbf{r}_N(t) - \mathbf{R}_i}\right\rvert}}.\end{aligned} \hspace{\stretch{1}}(2.1)

Here $\mathbf{r}_N$ is the position vector for the heavy nucleus, and $\mathbf{R}_i$ is the position of each charge within the atom, where $i$ ranges over all the internal charges, positive and negative, within the atom.

Placing the origin close to the atom, we can write this interaction Hamiltonian as

\begin{aligned}H'(t) = \not{{\sum_i \frac{Z e q_i}{{\left\lvert{\mathbf{r}_N(t)}\right\rvert}}}}+ \sum_i Z e q_i \mathbf{R}_i \cdot {\left.{{\left(\frac{\partial {}}{\partial {\mathbf{r}}} \frac{1}{{{\left\lvert{ \mathbf{r}_N(t) - \mathbf{r}}\right\rvert}}}\right)}}\right\vert}_{{\mathbf{r} = 0}}\end{aligned} \hspace{\stretch{1}}(2.2)

The first term vanishes because the total charge in our neutral atom is zero. This leaves us with

\begin{aligned}\begin{aligned}H'(t) &= -\sum_i q_i \mathbf{R}_i \cdot {\left.{{\left(-\frac{\partial {}}{\partial {\mathbf{r}}} \frac{ Z e}{{\left\lvert{ \mathbf{r}_N(t) - \mathbf{r}}\right\rvert}}\right)}}\right\vert}_{{\mathbf{r} = 0}} \\ &= - \sum_i q_i \mathbf{R}_i \cdot \mathbf{E}(t),\end{aligned}\end{aligned} \hspace{\stretch{1}}(2.3)

where $\mathbf{E}(t)$ is the electric field at the origin due to the nucleus.

Introducing a dipole moment operator for the atom

\begin{aligned}\boldsymbol{\mu} = \sum_i q_i \mathbf{R}_i,\end{aligned} \hspace{\stretch{1}}(2.4)

the interaction takes the form

\begin{aligned}H'(t) = -\boldsymbol{\mu} \cdot \mathbf{E}(t).\end{aligned} \hspace{\stretch{1}}(2.5)

Here we have a quantum mechanical operator and a classical field taken together. This sort of dipole interaction also occurs when we place an atom into an electromagnetic field, treating the field classically, as depicted in figure (\ref{fig:qmTwoL6fig2})

\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.4\textheight]{qmTwoL6fig2}
\caption{atom in a field}
\end{figure}

In the figure, we can use the dipole interaction, provided $\lambda \gg a$, where $a$ is the “width” of the atom.

Because it is great for examples, we will see this dipole interaction a lot.
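As a numerical aside (mine, not from the lecture), we can spot check the dipole approximation by comparing the exact interaction energy of a toy neutral charge pair with the $-\boldsymbol{\mu} \cdot \mathbf{E}$ form. The charge values and separations below are hypothetical, and Gaussian units with the electron charge absorbed into the coefficients are assumed.

```python
import numpy as np

def h_exact(r_n, charges, positions, Ze):
    # H' = sum_i Ze q_i / |r_N - R_i|  (Gaussian units)
    return sum(Ze * q / np.linalg.norm(r_n - R)
               for q, R in zip(charges, positions))

def h_dipole(r_n, charges, positions, Ze):
    # H' ~= -mu . E with mu = sum_i q_i R_i, E the nuclear field at the origin
    mu = sum(q * R for q, R in zip(charges, positions))
    E = -Ze * r_n / np.linalg.norm(r_n)**3
    return -np.dot(mu, E)

# a neutral toy "atom": +1 and -1 charges offset 0.1 units from the origin
charges = [1.0, -1.0]
positions = [np.array([0.1, 0.0, 0.0]), np.array([-0.1, 0.0, 0.0])]
r_n = np.array([10.0, 0.0, 0.0])  # distant nucleus with Ze = 2
exact = h_exact(r_n, charges, positions, 2.0)
approx = h_dipole(r_n, charges, positions, 2.0)
print(exact, approx)  # agree to ~1e-4 relative error at this separation
```

The residual error is second order in the ratio of the atomic size to the nuclear distance, consistent with truncating the Taylor expansion at the dipole term.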

The interaction picture.

Having talked about both the Schrödinger and Heisenberg pictures, we can now move on to describe a hybrid, one where our Hamiltonian has been split into static and time dependent parts

\begin{aligned}H(t) = H_0 + H'(t)\end{aligned} \hspace{\stretch{1}}(2.6)

We will formulate an approach for dealing with problems of this sort called the interaction picture.

This is also covered in section 3.3 of the text, albeit in a much harder to understand fashion (the text appears to try to not pull the result from a magic hat, but the steps to get to the end result are messy). It would probably have been nicer to see it this way instead.

In the Schrödinger picture our dynamics have the form

\begin{aligned}i \hbar \frac{d}{dt} {\lvert {\psi_s(t)} \rangle} = H {\lvert {\psi_s(t)} \rangle}\end{aligned} \hspace{\stretch{1}}(2.7)

How about the Heisenberg picture? We look for a solution

\begin{aligned}{\lvert {\psi_s(t)} \rangle} = U(t, t_0) {\lvert {\psi_s(t_0)} \rangle}.\end{aligned} \hspace{\stretch{1}}(2.8)

We want to find the operator that evolves the state from its initial value at time $t_0$ to the arbitrary later state found at time $t$. Plugging in we have

\begin{aligned}i \hbar \frac{d{{}}}{dt} U(t, t_0) {\lvert {\psi_s(t_0)} \rangle}=H(t) U(t, t_0) {\lvert {\psi_s(t_0)} \rangle}.\end{aligned} \hspace{\stretch{1}}(2.9)

This has to hold for all ${\lvert {\psi_s(t_0)} \rangle}$, and we can equivalently seek a solution of the operator equation

\begin{aligned}i \hbar \frac{d{{}}}{dt} U(t, t_0) = H(t) U(t, t_0),\end{aligned} \hspace{\stretch{1}}(2.10)

where

\begin{aligned}U(t_0, t_0) = I,\end{aligned} \hspace{\stretch{1}}(2.11)

the identity for the Hilbert space.

Suppose that $H(t)$ was independent of time. We could find that

\begin{aligned}U(t, t_0) = e^{-i H(t - t_0)/\hbar}.\end{aligned} \hspace{\stretch{1}}(2.12)
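For a time independent $H$ this exponential solution is easy to check numerically. Here is a small sketch (my own aside, not class material) with a hypothetical two state Hamiltonian, computing the matrix exponential by diagonalization and testing both the operator equation and unitarity.

```python
import numpy as np

hbar = 1.0
H = np.array([[1.0, 0.5], [0.5, -1.0]])  # hypothetical Hermitian Hamiltonian

def U(t):
    # U(t) = e^{-i H t / hbar}, built by diagonalizing the Hermitian matrix H
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w * t / hbar)) @ V.conj().T

t, dt = 0.7, 1e-6
lhs = 1j * hbar * (U(t + dt) - U(t - dt)) / (2 * dt)  # i hbar dU/dt
rhs = H @ U(t)                                        # H U
err = np.max(np.abs(lhs - rhs))
unit = np.max(np.abs(U(t).conj().T @ U(t) - np.eye(2)))
print(err, unit)  # both numerically zero: U solves the equation and is unitary
```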

If $H(t)$ depends on time, can we guess that

\begin{aligned}U(t, t_0) = e^{-\frac{i}{\hbar} \int_{t_0}^t H(\tau) d\tau}\end{aligned} \hspace{\stretch{1}}(2.13)

holds? No. This may be true when $H(t)$ is a number, but when it is an operator, the Hamiltonian does not necessarily commute with itself at different times

\begin{aligned}\left[{H(t')},{H(t'')}\right] \ne 0.\end{aligned} \hspace{\stretch{1}}(2.14)

So this is wrong in general. As an aside, for numbers, 2.13 can be verified easily. We have

\begin{aligned}i \hbar \left( e^{-\frac{i}{\hbar} \int_{t_0}^t H(\tau) d\tau} \right)'&=i \hbar \left( -\frac{i}{\hbar} \right) \left( \int_{t_0}^t H(\tau) d\tau \right)'e^{-\frac{i}{\hbar} \int_{t_0}^t H(\tau) d\tau } \\ &=\left( H(t) \frac{dt}{dt} - H(t_0) \frac{dt_0}{dt} \right)e^{-\frac{i}{\hbar} \int_{t_0}^t H(\tau) d\tau} \\ &= H(t) U(t, t_0)\end{aligned}
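To see concretely that the guess 2.13 fails for operators, here is a numerical aside (mine, not class notes) using a hypothetical matrix valued $H(t) = \sigma_x + t \sigma_z$, for which $H$ at different times does not commute. The true time ordered evolution, approximated as a product of many short propagators, visibly differs from the naive exponential of $\int H$.

```python
import numpy as np

def expm_h(A):
    # e^{-iA} for Hermitian A, via eigendecomposition
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.exp(-1j * w)) @ V.conj().T

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H = lambda t: sx + t * sz  # [H(t'), H(t'')] != 0 when t' != t''

T, n = 2.0, 5000
dt = T / n
U_true = np.eye(2, dtype=complex)
for k in range(n):
    # time ordered product of short propagators, latest factor on the left
    U_true = expm_h(H((k + 0.5) * dt) * dt) @ U_true
U_naive = expm_h(sx * T + sz * T**2 / 2)  # exponential of the integral of H(t)
diff = np.max(np.abs(U_true - U_naive))
print(diff)  # clearly nonzero: the naive guess is wrong for operators
```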

Expectations

Suppose that we do find $U(t, t_0)$. Then our expectation takes the form

\begin{aligned}{\langle {\psi_s(t)} \rvert} O_s {\lvert {\psi_s(t)} \rangle} = {\langle {\psi_s(t_0)} \rvert} U^\dagger(t, t_0) O_s U(t, t_0) {\lvert {\psi_s(t_0)} \rangle} \end{aligned} \hspace{\stretch{1}}(2.15)

Put

\begin{aligned}{\lvert {\psi_H} \rangle} = {\lvert {\psi_s(t_0)} \rangle},\end{aligned} \hspace{\stretch{1}}(2.16)

and form

\begin{aligned}O_H = U^\dagger(t, t_0) O_s U(t, t_0) \end{aligned} \hspace{\stretch{1}}(2.17)

so that our expectation has the familiar representations

\begin{aligned}{\langle {\psi_s(t)} \rvert} O_s {\lvert {\psi_s(t)} \rangle} ={\langle {\psi_H} \rvert} O_H {\lvert {\psi_H} \rangle} \end{aligned} \hspace{\stretch{1}}(2.18)

New strategy. Interaction picture.

Let’s define

\begin{aligned}U_I(t, t_0) = e^{\frac{i}{\hbar} H_0(t - t_0)} U(t, t_0)\end{aligned} \hspace{\stretch{1}}(2.19)

or

\begin{aligned}U(t, t_0) = e^{-\frac{i}{\hbar} H_0(t - t_0)} U_I(t, t_0).\end{aligned} \hspace{\stretch{1}}(2.20)

Let’s see how this works. We have

\begin{aligned}i \hbar \frac{d{{U_I}}}{dt} &= i \hbar \frac{d{{}}}{dt} \left(e^{\frac{i}{\hbar} H_0(t - t_0)} U(t, t_0)\right) \\ &=-H_0 e^{\frac{i}{\hbar} H_0(t - t_0)} U(t, t_0)+e^{\frac{i}{\hbar} H_0(t - t_0)} \left( i \hbar \frac{d{{}}}{dt} U(t, t_0) \right) \\ &=e^{\frac{i}{\hbar} H_0(t - t_0)} \left( -H_0 + H_0 + H'(t) \right) U(t, t_0) \\ &=e^{\frac{i}{\hbar} H_0(t - t_0)} H'(t) U(t, t_0) \\ &=e^{\frac{i}{\hbar} H_0(t - t_0)} H'(t) e^{-\frac{i}{\hbar} H_0(t - t_0)} U_I(t, t_0),\end{aligned}

where $H_0$ has been commuted through its own exponential in order to cancel the two $H_0$ terms.

Define

\begin{aligned}\bar{H}'(t) =e^{\frac{i}{\hbar} H_0(t - t_0)} H'(t) e^{-\frac{i}{\hbar} H_0(t - t_0)},\end{aligned} \hspace{\stretch{1}}(2.21)

so that our operator equation takes the form

\begin{aligned}i \hbar \frac{d{{}}}{dt} U_I(t, t_0) = \bar{H}'(t) U_I(t, t_0).\end{aligned} \hspace{\stretch{1}}(2.22)

Note that we also have the required identity at the initial time

\begin{aligned}U_I(t_0, t_0) = I.\end{aligned} \hspace{\stretch{1}}(2.23)

Without requiring us to actually find $U(t, t_0)$, all of the dynamics of the time dependent interaction are now embedded in the operator equation governed by $\bar{H}'(t)$, while the simple evolution due to the time independent portion of the Hamiltonian has been separated out.
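This factorization can be tested numerically. The sketch below (an aside, not from the lecture) integrates both the full propagator and the interaction picture propagator for a hypothetical two level system with $H_0 = \sigma_z$ and $H'(t) = 0.3 \cos t \, \sigma_x$ (taking $\hbar = 1$ and $t_0 = 0$), and checks that $U = e^{-i H_0 t} U_I$.

```python
import numpy as np

def expm_h(A):
    # e^{-iA} for Hermitian A, via eigendecomposition
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.exp(-1j * w)) @ V.conj().T

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H0 = sz
Hp = lambda t: 0.3 * np.cos(t) * sx   # hypothetical time dependent H'(t)

T, n = 3.0, 6000
dt = T / n
U = np.eye(2, dtype=complex)    # full evolution: i dU/dt = (H0 + H'(t)) U
UI = np.eye(2, dtype=complex)   # interaction picture: i dUI/dt = Hbar'(t) UI
for k in range(n):
    t = (k + 0.5) * dt
    U = expm_h((H0 + Hp(t)) * dt) @ U
    E = expm_h(-H0 * t)                 # e^{+i H0 t}
    Hbar = E @ Hp(t) @ E.conj().T       # Hbar'(t) = e^{i H0 t} H'(t) e^{-i H0 t}
    UI = expm_h(Hbar * dt) @ UI
diff = np.max(np.abs(U - expm_h(H0 * T) @ UI))
print(diff)  # small, limited only by the integration step size
```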

Connection with the Schrödinger picture.

In the Schrödinger picture we have

\begin{aligned}{\lvert {\psi_s(t)} \rangle} &= U(t, t_0) {\lvert {\psi_s(t_0)} \rangle} \\ &=e^{-\frac{i}{\hbar} H_0(t - t_0)} U_I(t, t_0){\lvert {\psi_s(t_0)} \rangle}.\end{aligned}

With a definition of the interaction picture ket as

\begin{aligned}{\lvert {\psi_I} \rangle} = U_I(t, t_0) {\lvert {\psi_s(t_0)} \rangle} = U_I(t, t_0) {\lvert {\psi_H} \rangle},\end{aligned} \hspace{\stretch{1}}(2.24)

the Schrödinger picture is then related to the interaction picture by

\begin{aligned}{\lvert {\psi_s(t)} \rangle} = e^{-\frac{i}{\hbar} H_0(t - t_0)} {\lvert {\psi_I} \rangle}.\end{aligned} \hspace{\stretch{1}}(2.25)

Also, by multiplying 2.22 by our Schrödinger ket, we remove the last vestiges of $U_I$ and $U$ from the dynamical equation for our time dependent interaction

\begin{aligned}i \hbar \frac{d{{}}}{dt} {\lvert {\psi_I} \rangle} = \bar{H}'(t) {\lvert {\psi_I} \rangle}.\end{aligned} \hspace{\stretch{1}}(2.26)

Interaction picture expectation.

Inverting 2.25, we can form an operator expectation, and relate the interaction and Schrödinger pictures

\begin{aligned}{\langle {\psi_s(t)} \rvert} O_s {\lvert {\psi_s(t)} \rangle} ={\langle {\psi_I} \rvert} e^{\frac{i}{\hbar} H_0(t - t_0)}O_se^{-\frac{i}{\hbar} H_0(t - t_0)}{\lvert {\psi_I} \rangle} .\end{aligned} \hspace{\stretch{1}}(2.27)

With a definition

\begin{aligned}O_I =e^{\frac{i}{\hbar} H_0(t - t_0)}O_se^{-\frac{i}{\hbar} H_0(t - t_0)},\end{aligned} \hspace{\stretch{1}}(2.28)

we have

\begin{aligned}{\langle {\psi_s(t)} \rvert} O_s {\lvert {\psi_s(t)} \rangle} ={\langle {\psi_I} \rvert} O_I{\lvert {\psi_I} \rangle}.\end{aligned} \hspace{\stretch{1}}(2.29)

As before, the time evolution of our interaction picture operator can be found by taking derivatives of 2.28, for which we find

\begin{aligned}i \hbar \frac{d{{O_I(t)}}}{dt} = \left[{O_I(t)},{H_0}\right]\end{aligned} \hspace{\stretch{1}}(2.30)
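The commutator form 2.30 can be spot checked numerically by finite differences. The sketch below (not class material) uses a hypothetical two state $H_0$ and Schrödinger operator $O_s$, with $\hbar = 1$ and $t_0 = 0$.

```python
import numpy as np

def expm_h(A):
    # e^{-iA} for Hermitian A, via eigendecomposition
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.exp(-1j * w)) @ V.conj().T

H0 = np.array([[1.0, 0.2], [0.2, -0.5]], dtype=complex)  # hypothetical static H0
Os = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)   # a Schrodinger operator

def O_I(t):
    # O_I(t) = e^{i H0 t} O_s e^{-i H0 t}   (hbar = 1, t0 = 0)
    E = expm_h(-H0 * t)          # e^{+i H0 t}
    return E @ Os @ E.conj().T

t, dt = 0.9, 1e-6
lhs = 1j * (O_I(t + dt) - O_I(t - dt)) / (2 * dt)  # i hbar dO_I/dt
rhs = O_I(t) @ H0 - H0 @ O_I(t)                    # [O_I, H0]
print(np.max(np.abs(lhs - rhs)))  # numerically zero
```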

Summarizing the interaction picture.

Given

\begin{aligned}H(t) = H_0 + H'(t),\end{aligned} \hspace{\stretch{1}}(2.31)

and initial time states

\begin{aligned}{\lvert {\psi_I(t_0)} \rangle} ={\lvert {\psi_s(t_0)} \rangle} = {\lvert {\psi_H} \rangle},\end{aligned} \hspace{\stretch{1}}(2.32)

we have

\begin{aligned}{\langle {\psi_s(t)} \rvert} O_s {\lvert {\psi_s(t)} \rangle} ={\langle {\psi_I} \rvert} O_I{\lvert {\psi_I} \rangle},\end{aligned} \hspace{\stretch{1}}(2.33)

where

\begin{aligned}{\lvert {\psi_I} \rangle} = U_I(t, t_0) {\lvert {\psi_s(t_0)} \rangle},\end{aligned} \hspace{\stretch{1}}(2.34)

and

\begin{aligned}i \hbar \frac{d{{}}}{dt} {\lvert {\psi_I} \rangle} = \bar{H}'(t) {\lvert {\psi_I} \rangle},\end{aligned} \hspace{\stretch{1}}(2.35)

or

\begin{aligned}i \hbar \frac{d{{}}}{dt} U_I(t, t_0) &= \bar{H}'(t) U_I(t, t_0) \\ U_I(t_0, t_0) &= I.\end{aligned} \hspace{\stretch{1}}(2.36)

Our interaction picture Hamiltonian is

\begin{aligned}\bar{H}'(t) =e^{\frac{i}{\hbar} H_0(t - t_0)} H'(t) e^{-\frac{i}{\hbar} H_0(t - t_0)},\end{aligned} \hspace{\stretch{1}}(2.38)

and for Schrödinger operators, independent of time, we have the dynamical equation

\begin{aligned}i \hbar \frac{d{{O_I(t)}}}{dt} = \left[{O_I(t)},{H_0}\right]\end{aligned} \hspace{\stretch{1}}(2.39)

Justifying the Taylor expansion above (not class notes).

Multivariable Taylor series

As outlined in section 2.8 ($8.10$) of [2], we want to derive the multi-variable Taylor expansion for a scalar valued function of some number of variables

\begin{aligned}f(\mathbf{u}) = f(u^1, u^2, \cdots),\end{aligned} \hspace{\stretch{1}}(3.40)

consider the displacement operation applied to the vector argument

\begin{aligned}f(\mathbf{a} + \mathbf{x}) = {\left.{{f(\mathbf{a} + t \mathbf{x})}}\right\vert}_{{t=1}}.\end{aligned} \hspace{\stretch{1}}(3.41)

We can Taylor expand a single variable function without any trouble, so introduce

\begin{aligned}g(t) = f(\mathbf{a} + t \mathbf{x}),\end{aligned} \hspace{\stretch{1}}(3.42)

where

\begin{aligned}g(1) = f(\mathbf{a} + \mathbf{x}).\end{aligned} \hspace{\stretch{1}}(3.43)

We have

\begin{aligned}g(t) = g(0) + t {\left.{{ \frac{\partial {g}}{\partial {t}} }}\right\vert}_{{t = 0}}+ \frac{t^2}{2!} {\left.{{ \frac{\partial^2 {g}}{\partial {t^2}} }}\right\vert}_{{t = 0}}+ \cdots,\end{aligned} \hspace{\stretch{1}}(3.44)

so that

\begin{aligned}g(1) = g(0) + {\left.{{ \frac{\partial {g}}{\partial {t}} }}\right\vert}_{{t = 0}}+ \frac{1}{2!} {\left.{{ \frac{\partial^2 {g}}{\partial {t^2}} }}\right\vert}_{{t = 0}}+ \cdots.\end{aligned} \hspace{\stretch{1}}(3.45)

The multivariable Taylor series now becomes a plain old application of the chain rule, where we have to evaluate

\begin{aligned}\frac{dg}{dt} &= \frac{d{{}}}{dt} f(a^1 + t x^1, a^2 + t x^2, \cdots) \\ &= \sum_i \frac{\partial {}}{\partial {(a^i + t x^i)}} f(\mathbf{a} + t \mathbf{x}) \frac{\partial {(a^i + t x^i)}}{\partial {t}},\end{aligned}

so that

\begin{aligned}{\left.{{\frac{dg}{dt} }}\right\vert}_{{t=0}}= \sum_i x^i \left( {\left.{{ \frac{\partial {f}}{\partial {x^i}}}}\right\vert}_{{x^i = a^i}}\right).\end{aligned} \hspace{\stretch{1}}(3.46)

Assuming an Euclidean space we can write this in the notationally more pleasant fashion using a gradient operator for the space

\begin{aligned}{\left.{{\frac{dg}{dt} }}\right\vert}_{{t=0}} = {\left.{{\mathbf{x} \cdot \boldsymbol{\nabla}_{\mathbf{u}} f(\mathbf{u})}}\right\vert}_{{\mathbf{u} = \mathbf{a}}}.\end{aligned} \hspace{\stretch{1}}(3.47)

To handle the higher order terms, we repeat the chain rule application, yielding for example

\begin{aligned}{\left.{{\frac{d^2 f(\mathbf{a} + t \mathbf{x})}{dt^2} }}\right\vert}_{{t=0}} &={\left.{{\frac{d{{}}}{dt} \sum_i x^i \frac{\partial {f(\mathbf{a} + t \mathbf{x})}}{\partial {(a^i + t x^i)}} }}\right\vert}_{{t=0}}\\ &={\left.{{\sum_i x^i \frac{\partial {}}{\partial {(a^i + t x^i)}} \frac{d{{f(\mathbf{a} + t \mathbf{x})}}}{dt}}}\right\vert}_{{t=0}} \\ &={\left.{{(\mathbf{x} \cdot \boldsymbol{\nabla}_{\mathbf{u}})^2 f(\mathbf{u})}}\right\vert}_{{\mathbf{u} = \mathbf{a}}}.\end{aligned}

Thus the Taylor series associated with a vector displacement takes the tidy form

\begin{aligned}f(\mathbf{a} + \mathbf{x}) = \sum_{k=0}^\infty \frac{1}{{k!}} {\left.{{(\mathbf{x} \cdot \boldsymbol{\nabla}_{\mathbf{u}})^k f(\mathbf{u})}}\right\vert}_{{\mathbf{u} = \mathbf{a}}}.\end{aligned} \hspace{\stretch{1}}(3.48)

Even more fancy, we can form the operator equation

\begin{aligned}f(\mathbf{a} + \mathbf{x}) = {\left.{{e^{ \mathbf{x} \cdot \boldsymbol{\nabla}_{\mathbf{u}} } f(\mathbf{u})}}\right\vert}_{{\mathbf{u} = \mathbf{a}}}\end{aligned} \hspace{\stretch{1}}(3.49)

Here a dummy variable $\mathbf{u}$ has been retained as an instruction not to differentiate the $\mathbf{x}$ part of the directional derivative in any repeated applications of the $\mathbf{x} \cdot \boldsymbol{\nabla}$ operator.

That notational kludge can be removed by swapping $\mathbf{a}$ and $\mathbf{x}$

\begin{aligned}f(\mathbf{a} + \mathbf{x}) = \sum_{k=0}^\infty \frac{1}{{k!}} (\mathbf{a} \cdot \boldsymbol{\nabla})^k f(\mathbf{x})=e^{ \mathbf{a} \cdot \boldsymbol{\nabla} } f(\mathbf{x}),\end{aligned} \hspace{\stretch{1}}(3.50)

where $\boldsymbol{\nabla} = \boldsymbol{\nabla}_{\mathbf{x}} = ({\partial {}}/{\partial {x^1}}, {\partial {}}/{\partial {x^2}}, ...)$.
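As a quick check of this shift operator idea (my aside, not part of the derivation), for a polynomial the series terminates, so the single variable form $f(x + a) = e^{a \, d/dx} f(x)$ can be verified exactly; the particular cubic below is arbitrary.

```python
import math
import numpy as np

# For a cubic the Taylor/shift series terminates at k = 3, so the identity
# f(x + a) = sum_k (a^k / k!) f^(k)(x) holds exactly.
p = np.polynomial.Polynomial([1.0, -2.0, 0.5, 3.0])  # 1 - 2u + u^2/2 + 3u^3
a, x = 0.7, 1.3
series = p(x) + sum(a**k / math.factorial(k) * p.deriv(k)(x) for k in (1, 2, 3))
shifted = p(x + a)
print(series, shifted)  # identical up to rounding
```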

Having derived this (or for those with lesser degrees of amnesia, recall it), we can see that 2.2 was a direct application of this, retaining no second order or higher terms.

Our expression used in the interaction Hamiltonian discussion was

\begin{aligned}\frac{1}{{{\left\lvert{\mathbf{r} - \mathbf{R}}\right\rvert}}} \approx \frac{1}{{{\left\lvert{\mathbf{r}}\right\rvert}}} + \mathbf{R} \cdot {\left.{{\left(\frac{\partial {}}{\partial {\mathbf{R}}} \frac{1}{{{\left\lvert{ \mathbf{r} - \mathbf{R}}\right\rvert}}}\right)}}\right\vert}_{{\mathbf{R} = 0}}.\end{aligned} \hspace{\stretch{1}}(3.51)

We can see that this has the same structure as above, with some variable substitutions. Evaluating the gradient we have

\begin{aligned}\frac{\partial {}}{\partial {\mathbf{R}}} \frac{1}{{{\left\lvert{ \mathbf{r} - \mathbf{R}}\right\rvert}}}&=\mathbf{e}_i \frac{\partial {}}{\partial {R^i}} ((x^j - R^j)^2)^{-1/2} \\ &=\mathbf{e}_i \left(-\frac{1}{{2}}\right) 2 (x^j - R^j) \frac{\partial {(x^j - R^j)}}{\partial {R^i}} \frac{1}{{{\left\lvert{\mathbf{r} - \mathbf{R}}\right\rvert}^3}} \\ &= \frac{\mathbf{r} - \mathbf{R}}{{\left\lvert{\mathbf{r} - \mathbf{R}}\right\rvert}^3} ,\end{aligned}

and at $\mathbf{R} = 0$ we have

\begin{aligned}\frac{1}{{{\left\lvert{\mathbf{r} - \mathbf{R}}\right\rvert}}} \approx \frac{1}{{{\left\lvert{\mathbf{r}}\right\rvert}}} + \mathbf{R} \cdot \frac{\mathbf{r}}{{\left\lvert{\mathbf{r}}\right\rvert}^3}.\end{aligned} \hspace{\stretch{1}}(3.52)

We see that this directional derivative produces the classical Coulomb electric field expression for an electrostatic distribution, once we take the $\mathbf{r}/{\left\lvert{\mathbf{r}}\right\rvert}^3$ factor and multiply it by $- Z e$.
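Numerically, this first order expansion is easy to spot check (the vectors below are hypothetical, chosen so that ${\left\lvert{\mathbf{R}}\right\rvert} \ll {\left\lvert{\mathbf{r}}\right\rvert}$):

```python
import numpy as np

r = np.array([5.0, 1.0, 0.0])       # nucleus location (hypothetical)
R = np.array([0.05, -0.02, 0.01])   # internal charge offset, |R| << |r|
exact = 1.0 / np.linalg.norm(r - R)
rn = np.linalg.norm(r)
approx = 1.0 / rn + np.dot(R, r) / rn**3  # first order Taylor expansion
print(exact, approx)  # agree to second order in |R|/|r|
```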

With algebra.

A different way to justify the expansion of 2.2 is to consider a Clifford algebra factorization (following notation from [3]) of the absolute vector difference, where $\mathbf{R}$ is considered small.

\begin{aligned}{\left\lvert{\mathbf{r} - \mathbf{R}}\right\rvert}&= \sqrt{ \left(\mathbf{r} - \mathbf{R}\right) \left(\mathbf{r} - \mathbf{R}\right) } \\ &= \sqrt{ \left\langle{{\mathbf{r} \left(1 - \frac{1}{\mathbf{r}} \mathbf{R}\right) \left(1 - \mathbf{R} \frac{1}{\mathbf{r}}\right) \mathbf{r}}}\right\rangle } \\ &= \sqrt{ \left\langle{{\mathbf{r}^2 \left(1 - \frac{1}{\mathbf{r}} \mathbf{R}\right) \left(1 - \mathbf{R} \frac{1}{\mathbf{r}}\right) }}\right\rangle } \\ &= {\left\lvert{\mathbf{r}}\right\rvert} \sqrt{ 1 - 2 \frac{1}{\mathbf{r}} \cdot \mathbf{R} + \left\langle{{\frac{1}{\mathbf{r}} \mathbf{R} \mathbf{R} \frac{1}{\mathbf{r}}}}\right\rangle} \\ &= {\left\lvert{\mathbf{r}}\right\rvert} \sqrt{ 1 - 2 \frac{1}{\mathbf{r}} \cdot \mathbf{R} + \frac{\mathbf{R}^2}{\mathbf{r}^2}}\end{aligned}

Neglecting the $\mathbf{R}^2$ term, we can then Taylor series expand this scalar expression

\begin{aligned}\frac{1}{{{\left\lvert{\mathbf{r} - \mathbf{R}}\right\rvert}}} \approx\frac{1}{{{\left\lvert{\mathbf{r}}\right\rvert}}} \left( 1 + \frac{1}{\mathbf{r}} \cdot \mathbf{R}\right) =\frac{1}{{{\left\lvert{\mathbf{r}}\right\rvert}}} + \frac{\hat{\mathbf{r}}}{\mathbf{r}^2} \cdot \mathbf{R}=\frac{1}{{{\left\lvert{\mathbf{r}}\right\rvert}}} + \frac{\mathbf{r}}{{\left\lvert{\mathbf{r}}\right\rvert}^3} \cdot \mathbf{R}.\end{aligned} \hspace{\stretch{1}}(3.53)

Observe this is what was found with the multivariable Taylor series expansion too.

References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.

[2] D. Hestenes. New Foundations for Classical Mechanics. Kluwer Academic Publishers, 1999.

[3] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.

PHY456H1F: Quantum Mechanics II. Lecture 5 (Taught by Prof J.E. Sipe). Perturbation theory and degeneracy. Review of dynamics

Posted by peeterjoot on September 26, 2011


Issues concerning degeneracy.

When the perturbed state is non-degenerate.

Suppose the state of interest is non-degenerate but others are

FIXME: diagram. states designated by dashes labeled $n1$, $n2$, $n3$ degeneracy $\alpha = 3$ for energy $E_n^{(0)}$.

This is no problem except for notation, and if the analysis is repeated we find

\begin{aligned}E_s &= E_s^{(0)} + \lambda {H_{ss}}' + \lambda^2 \sum_{m \ne s, \alpha} \frac{{\left\lvert{{H_{m \alpha ; s}}'}\right\rvert}^2 }{ E_s^{(0)} - E_m^{(0)} } + \cdots\\ {\lvert {\bar{\psi}_s} \rangle} &= {\lvert {{\psi_s}^{(0)}} \rangle} + \lambda\sum_{m \ne s, \alpha} \frac{{H_{m \alpha ; s}}'}{ E_s^{(0)} - E_m^{(0)} } {\lvert {{\psi_{m \alpha}}^{(0)}} \rangle}+ \cdots,\end{aligned} \hspace{\stretch{1}}(2.1)

where

\begin{aligned}{H_{m \alpha ; s}}' ={\langle {{\psi_{m \alpha}}^{(0)}} \rvert} H' {\lvert {{\psi_{s}}^{(0)}} \rangle}\end{aligned} \hspace{\stretch{1}}(2.3)

When the perturbed state is also degenerate.

FIXME: diagram. states designated by dashes labeled $n1$, $n2$, $n3$ degeneracy $\alpha = 3$ for energy $E_n^{(0)}$, and states designated by dashes labeled $s1$, $s2$, $s3$ degeneracy $\alpha = 3$ for energy $E_s^{(0)}$.

If we just blindly repeat the derivation for the non-degenerate case we would obtain

\begin{aligned}E_s &= E_s^{(0)} + \lambda {H_{s1 ; s1}}' + \lambda^2 \sum_{m \ne s, \alpha} \frac{{\left\lvert{{H_{m \alpha ; s1}}'}\right\rvert}^2 }{ E_s^{(0)} - E_m^{(0)} } + \lambda^2 \sum_{\alpha \ne 1} \frac{{\left\lvert{{H_{s \alpha ; s1}}'}\right\rvert}^2 }{ E_s^{(0)} - E_s^{(0)} } + \cdots\\ {\lvert {\bar{\psi}_s} \rangle} &= {\lvert {{\psi_s}^{(0)}} \rangle} + \lambda\sum_{m \ne s, \alpha} \frac{{H_{m \alpha ; s}}'}{ E_s^{(0)} - E_m^{(0)} } {\lvert {{\psi_{m \alpha}}^{(0)}} \rangle}+ \lambda\sum_{\alpha \ne s1} \frac{{H_{s \alpha ; s1}}'}{ E_s^{(0)} - E_s^{(0)} } {\lvert {{\psi_{s \alpha}}^{(0)}} \rangle}+ \cdots,\end{aligned} \hspace{\stretch{1}}(2.4)

where

\begin{aligned}{H_{m \alpha ; s1}}' ={\langle {{\psi_{m \alpha}}^{(0)}} \rvert} H' {\lvert {{\psi_{s1}}^{(0)}} \rangle}\end{aligned} \hspace{\stretch{1}}(2.6)

Note that the $E_s^{(0)} - E_s^{(0)}$ in the denominators is NOT a typo; this vanishing denominator is exactly why we run into trouble. There is one case where a perturbation approach is still possible. That case is if we happen to have

\begin{aligned}{\langle {{\psi_{m \alpha}}^{(0)}} \rvert} H' {\lvert {{\psi_{s1}}^{(0)}} \rangle} = 0.\end{aligned} \hspace{\stretch{1}}(2.7)

That may not be obvious, but if one returns to the original derivation, the right terms cancel so that one will not end up with the $0/0$ problem.

FIXME: do this derivation.

Diagonalizing the perturbation Hamiltonian.

Suppose that we do not have this special zero condition that allows the perturbation treatment to remain valid. What can we do? It turns out that we can make use of the fact that the perturbation Hamiltonian is Hermitian, and diagonalize the matrix

\begin{aligned}{\langle {{\psi_{s \alpha}}^{(0)}} \rvert} H' {\lvert {{\psi_{s \beta}}^{(0)}} \rangle} \end{aligned} \hspace{\stretch{1}}(2.8)

In the example of a two fold degeneracy, this amounts to us choosing not to work with the states

\begin{aligned}{\lvert {\psi_{s1}^{(0)}} \rangle}, {\lvert {\psi_{s2}^{(0)}} \rangle},\end{aligned} \hspace{\stretch{1}}(2.9)

but instead with some linear combinations of the two

\begin{aligned}{\lvert {\psi_{sI}^{(0)}} \rangle} &= a_1 {\lvert {\psi_{s1}^{(0)}} \rangle} + b_1 {\lvert {\psi_{s2}^{(0)}} \rangle} \\ {\lvert {\psi_{sII}^{(0)}} \rangle} &= a_2 {\lvert {\psi_{s1}^{(0)}} \rangle} + b_2 {\lvert {\psi_{s2}^{(0)}} \rangle} \end{aligned} \hspace{\stretch{1}}(2.10)

In this new basis, once found, we have

\begin{aligned}{\langle {{\psi_{s \alpha}}^{(0)}} \rvert} H' {\lvert {{\psi_{s \beta}}^{(0)}} \rangle} = \mathcal{H}_\alpha \delta_{\alpha \beta}\end{aligned} \hspace{\stretch{1}}(2.12)

Utilizing this basis, if the analysis is repeated correctly, one gets

\begin{aligned}E_{s\alpha} &= E_s^{(0)} + \lambda {H_{s\alpha ; s\alpha}}' + \lambda^2 \sum_{m \ne s, \alpha} \frac{{\left\lvert{{H_{m \beta ; s \alpha}}'}\right\rvert}^2 }{ E_s^{(0)} - E_m^{(0)} } + \cdots\\ {\lvert {\bar{\psi}_{s \alpha}} \rangle} &= {\lvert {{\psi_{s \alpha}}^{(0)}} \rangle} + \lambda\sum_{m \ne s, \beta} \frac{{H_{m \beta ; s \alpha}}'}{ E_s^{(0)} - E_m^{(0)} } {\lvert {{\psi_{m \beta}}^{(0)}} \rangle}+ \cdots.\end{aligned} \hspace{\stretch{1}}(2.13)

We see that a degenerate state can be split by applying perturbation.
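Here is a small numerical illustration of this splitting (mine, not from the lecture): a hypothetical $3 \times 3$ system with a twofold degenerate level, where diagonalizing $H'$ restricted to the degenerate subspace reproduces the exact first order splitting.

```python
import numpy as np

H0 = np.diag([0.0, 0.0, 5.0])        # twofold degenerate ground level
Hp = np.array([[0.0, 1.0, 0.3],
               [1.0, 0.0, 0.2],
               [0.3, 0.2, 0.0]])     # Hermitian perturbation (hypothetical)
lam = 0.01

# first order: diagonalize H' restricted to the degenerate 2x2 subspace
first_order = np.sort(np.linalg.eigvalsh(Hp[:2, :2]))    # eigenvalues -1, +1
exact = np.sort(np.linalg.eigvalsh(H0 + lam * Hp))[:2]   # exact lowest levels
print(exact, lam * first_order)  # the degenerate level splits by ~2*lam
```

The exact eigenvalues agree with $E_s^{(0)} + \lambda \mathcal{H}_\alpha$ up to second order corrections, which are suppressed here by the large gap to the third level.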

FIXME: do this derivation.

FIXME: diagram. $E_s^{(0)}$ as one energy level without perturbation, and as two distinct levels with perturbation.

I'll bet that this is the origin of the spectral line splitting, especially given that an atom like hydrogen has degenerate states.

Review of dynamics.

We want to move on to time dependent problems. In general for a time dependent problem, the answer follows provided one has solved for all the perturbed energy eigenvalues. This can be laborious (or not feasible due to infinite sums).

Before doing this, let’s review our dynamics as covered in section 3 of the text [1].

Schrödinger and Heisenberg pictures

Our operator equation in the Schrödinger picture is the familiar

\begin{aligned}i \hbar \frac{d}{dt} {\lvert {\psi_s(t)} \rangle} = H {\lvert {\psi_s(t)} \rangle}\end{aligned} \hspace{\stretch{1}}(3.15)

and most of our operators $X, P, \cdots$ are time independent. The expectation of such an operator evolves as

\begin{aligned}\left\langle{{O}}\right\rangle(t) = {\langle {\psi_s(t)} \rvert} O_s{\lvert {\psi_s(t)} \rangle}\end{aligned} \hspace{\stretch{1}}(3.16)

where $O_s$ is the operator in the Schrödinger picture, and is not time dependent.

Formally, the time evolution of any state is given by

\begin{aligned}{\lvert {\psi_s(t)} \rangle} = e^{-i H t/\hbar}{\lvert {\psi_s(0)} \rangle} = U(t, 0) {\lvert {\psi_s(0)} \rangle} \end{aligned} \hspace{\stretch{1}}(3.17)

so the expectation of an operator can be written

\begin{aligned}\left\langle{{O}}\right\rangle(t) = {\langle {\psi_s(0)} \rvert} e^{i H t/\hbar}O_se^{-i H t/\hbar}{\lvert {\psi_s(0)} \rangle}.\end{aligned} \hspace{\stretch{1}}(3.18)

With the introduction of the Heisenberg ket

\begin{aligned}{\lvert {\psi_H} \rangle} = {\lvert {\psi_s(0)} \rangle},\end{aligned} \hspace{\stretch{1}}(3.19)

and Heisenberg operators

\begin{aligned}O_H = e^{i H t/\hbar} O_s e^{-i H t/\hbar},\end{aligned} \hspace{\stretch{1}}(3.20)

the expectation evolution takes the form

\begin{aligned}\left\langle{{O}}\right\rangle(t) = {\langle {\psi_H} \rvert} O_H{\lvert {\psi_H} \rangle}.\end{aligned} \hspace{\stretch{1}}(3.21)

Note that because the Hamiltonian commutes with its exponential (it commutes with itself and any power series of itself), the Hamiltonian in the Heisenberg picture is the same as in the Schrödinger picture

\begin{aligned}H_H = e^{i H t/\hbar} H e^{-i H t/\hbar} = H.\end{aligned} \hspace{\stretch{1}}(3.22)
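The equality of expectations across the two pictures is easy to verify numerically. The sketch below (an aside, toy values, $\hbar = 1$) computes the same expectation both ways.

```python
import numpy as np

def expm_h(A):
    # e^{-iA} for Hermitian A, via eigendecomposition
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.exp(-1j * w)) @ V.conj().T

H = np.array([[1.0, 0.3], [0.3, -1.0]], dtype=complex)   # hypothetical H
Os = np.array([[0.5, 1.0], [1.0, -0.5]], dtype=complex)  # an observable
psi0 = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)
t = 1.7

# Schrodinger picture: evolve the state, keep the operator fixed
psi_t = expm_h(H * t) @ psi0                  # e^{-i H t} |psi(0)>
schrod = psi_t.conj() @ Os @ psi_t

# Heisenberg picture: evolve the operator, keep the state fixed
U = expm_h(H * t)
O_H = U.conj().T @ Os @ U                     # e^{i H t} O_s e^{-i H t}
heis = psi0.conj() @ O_H @ psi0
print(schrod.real, heis.real)  # the two pictures agree
```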

Time evolution and the Commutator

Taking the derivative of 3.20 provides us with the time evolution of any operator in the Heisenberg picture

\begin{aligned}i \hbar \frac{d}{dt} O_H(t) &=i \hbar \frac{d}{dt} \left( e^{i H t/\hbar} O_s e^{-i H t/\hbar}\right) \\ &=i \hbar \left( \frac{i H}{\hbar} e^{i H t/\hbar} O_s e^{-i H t/\hbar}+e^{i H t/\hbar} O_s e^{-i H t/\hbar} \frac{-i H}{\hbar} \right) \\ &=\left( -H O_H+O_H H\right).\end{aligned}

We can write this as a commutator

\begin{aligned}i \hbar \frac{d}{dt} O_H(t) = \left[{O_H},{H}\right].\end{aligned} \hspace{\stretch{1}}(3.23)

Summarizing the two pictures.

\begin{aligned}\text{Schr\"{o}dinger picture} &\qquad \text{Heisenberg picture} \\ i \hbar \frac{d}{dt} {\lvert {\psi_s(t)} \rangle} = H {\lvert {\psi_s(t)} \rangle} &\qquad i \hbar \frac{d}{dt} O_H(t) = \left[{O_H},{H}\right] \\ {\langle {\psi_s(t)} \rvert} O_S {\lvert {\psi_s(t)} \rangle} &= {\langle {\psi_H} \rvert} O_H {\lvert {\psi_H} \rangle} \\ {\lvert {\psi_s(0)} \rangle} &= {\lvert {\psi_H} \rangle} \\ O_S &= O_H(0)\end{aligned}

References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.

PHY356F: Quantum Mechanics I. Lecture 11 notes. Harmonic Oscillator.

Posted by peeterjoot on November 30, 2010

Setup.

Why study this problem?

It is relevant to describing the oscillation of molecules, quantum states of light, vibrations of the lattice structure of a solid, and so on.

FIXME: projected picture of masses on springs, with a ladle shaped well, approximately Harmonic about the minimum of the bucket.

The problem to solve is the one dimensional Hamiltonian

\begin{aligned}V(X) &= \frac{1}{{2}} K X^2 \\ K &= m \omega^2 \\ H &= \frac{P^2}{2m} + V(X)\end{aligned} \hspace{\stretch{1}}(8.168)

where $m$ is the mass, $\omega$ is the frequency, $X$ is the position operator, and $P$ is the momentum operator. Of these quantities, $\omega$ and $m$ are classical quantities.

This problem can be used to illustrate some of the reasons why we study the different pictures (Heisenberg, Interaction and Schrödinger). It is a problem well suited to all of these. (FIXME: look up an example of this with the interaction picture; the book covers the Heisenberg and Schrödinger methods.)

We attack this with a non-intuitive, but cool technique. Introduce the raising $a^\dagger$ and lowering $a$ operators:

\begin{aligned}a &= \sqrt{\frac{m \omega}{2 \hbar}} \left( X + i \frac{P}{m\omega} \right) \\ a^\dagger &= \sqrt{\frac{m \omega}{2 \hbar}} \left( X - i \frac{P}{m\omega} \right)\end{aligned} \hspace{\stretch{1}}(8.171)

\paragraph{Question:} are we using the dagger for more than Hermitian conjugation in this case?
\paragraph{Answer:} No, this is precisely the Hermitian conjugation operation.

Solving for $X$ and $P$ in terms of $a$ and $a^\dagger$, we have

\begin{aligned}a + a^\dagger &= \sqrt{\frac{m \omega}{2 \hbar}} 2 X \\ a - a^\dagger &= \sqrt{\frac{m \omega}{2 \hbar}} 2 i \frac{P }{m \omega}\end{aligned}

or

\begin{aligned}X &= \sqrt{\frac{\hbar}{2 m \omega}} (a^\dagger + a) \\ P &= i \sqrt{\frac{\hbar m \omega}{2}} (a^\dagger -a)\end{aligned} \hspace{\stretch{1}}(8.173)

Express $H$ in terms of $a$ and $a^\dagger$

\begin{aligned}H &= \frac{P^2}{2m} + \frac{1}{{2}} K X^2 \\ &= \frac{1}{2m} \left(i \sqrt{\frac{\hbar m \omega}{2}} (a^\dagger -a)\right)^2+ \frac{1}{{2}} m \omega^2\left(\sqrt{\frac{\hbar}{2 m \omega}} (a^\dagger + a) \right)^2 \\ &= \frac{-\hbar \omega}{4} \left(a^\dagger a^\dagger + a^2 - a a^\dagger - a^\dagger a\right)+ \frac{\hbar \omega}{4}\left(a^\dagger a^\dagger + a^2 + a a^\dagger + a^\dagger a\right) \\ \end{aligned}

\begin{aligned}H= \frac{\hbar \omega}{2} \left(a a^\dagger + a^\dagger a\right) = \frac{\hbar \omega}{2} \left(2 a^\dagger a + \left[{a},{a^\dagger}\right]\right) \end{aligned} \hspace{\stretch{1}}(8.175)

Since $\left[{X},{P}\right] = i \hbar \mathbf{1}$, we can show that $\left[{a},{a^\dagger}\right] = \mathbf{1}$. Solve for $\left[{a},{a^\dagger}\right]$ as follows

\begin{aligned}i \hbar &=\left[{X},{P}\right] \\ &=\left[{\sqrt{\frac{\hbar}{2 m \omega}} (a^\dagger + a) },{i \sqrt{\frac{\hbar m \omega}{2}} (a^\dagger -a)}\right] \\ &=\sqrt{\frac{\hbar}{2 m \omega}} i \sqrt{\frac{\hbar m \omega}{2}} \left[{a^\dagger + a},{a^\dagger -a}\right] \\ &= \frac{i \hbar}{2}\left(\left[{a^\dagger},{a^\dagger}\right] -\left[{a^\dagger},{a}\right] +\left[{a},{a^\dagger}\right] -\left[{a},{a}\right] \right) \\ &= \frac{i \hbar}{2}\left(0+2 \left[{a},{a^\dagger}\right] -0\right)\end{aligned}

Comparing LHS and RHS we have as stated

\begin{aligned}\left[{a},{a^\dagger}\right] = \mathbf{1}\end{aligned} \hspace{\stretch{1}}(8.176)

and thus from 8.175 we have

\begin{aligned}H = \hbar \omega \left( a^\dagger a + \frac{\mathbf{1}}{2} \right)\end{aligned} \hspace{\stretch{1}}(8.177)
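As a quick numerical sanity check on 8.176 and 8.177, we can represent $a$ and $a^\dagger$ as truncated matrices in the number basis. A minimal sketch (the matrix elements $\sqrt{n}$ used here are derived later in these notes, and $\hbar = \omega = 1$):

```python
import numpy as np

# Truncated number-basis matrices for a and a^dagger (matrix elements
# sqrt(n) are assumed here; they are derived later in these notes).
N = 20
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # a|n> = sqrt(n)|n-1>
ad = a.T                                     # a^dagger; a is real, so .T suffices

# [a, a^dagger] = 1 holds exactly except in the last diagonal entry,
# an artifact of truncating the infinite-dimensional matrices.
comm = a @ ad - ad @ a
print(np.allclose(comm[:-1, :-1], np.eye(N - 1)))            # True

# H = hbar*omega*(a^dagger a + 1/2), here with hbar = omega = 1.
H = (a @ ad + ad @ a) / 2
print(np.allclose(np.diag(H)[:-1], np.arange(N - 1) + 0.5))  # True
```

The truncation only corrupts the last row and column, so the checks exclude those entries.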

Let ${\lvert {n} \rangle}$ be the eigenstate of $H$ so that $H{\lvert {n} \rangle} = E_n {\lvert {n} \rangle}$. From 8.177 we have

\begin{aligned}H {\lvert {n} \rangle} =\hbar \omega \left( a^\dagger a + \frac{\mathbf{1}}{2} \right) {\lvert {n} \rangle}\end{aligned} \hspace{\stretch{1}}(8.178)

or

\begin{aligned}a^\dagger a {\lvert {n} \rangle} + \frac{{\lvert {n} \rangle}}{2} = \frac{E_n}{\hbar \omega} {\lvert {n} \rangle}\end{aligned} \hspace{\stretch{1}}(8.179)

\begin{aligned}a^\dagger a {\lvert {n} \rangle} = \left( \frac{E_n}{\hbar \omega} - \frac{1}{{2}} \right) {\lvert {n} \rangle} = \lambda_n {\lvert {n} \rangle}\end{aligned} \hspace{\stretch{1}}(8.180)

We wish now to find the eigenstates of the “Number” operator $a^\dagger a$, which are simultaneously eigenstates of the Hamiltonian operator.

Observe that we have

\begin{aligned}a^\dagger a (a^\dagger {\lvert {n} \rangle} ) &= a^\dagger ( a a^\dagger {\lvert {n} \rangle} ) \\ &= a^\dagger ( \mathbf{1} + a^\dagger a ) {\lvert {n} \rangle}\end{aligned}

where we used $\left[{a},{a^\dagger}\right] = a a^\dagger - a^\dagger a = \mathbf{1}$.

\begin{aligned}a^\dagger a (a^\dagger {\lvert {n} \rangle} ) &= a^\dagger \left( \mathbf{1} + \frac{E_n}{\hbar\omega} - \frac{\mathbf{1}}{2} \right) {\lvert {n} \rangle} \\ &= a^\dagger \left( \frac{E_n}{\hbar\omega} + \frac{\mathbf{1}}{2} \right) {\lvert {n} \rangle},\end{aligned}

or

\begin{aligned}a^\dagger a (a^\dagger {\lvert {n} \rangle} ) = (\lambda_n + 1) (a^\dagger {\lvert {n} \rangle} )\end{aligned} \hspace{\stretch{1}}(8.181)

The new state $a^\dagger {\lvert {n} \rangle}$ is presumed to lie in the same space, expressible as a linear combination of the basis states of that space. Applying the operator $a^\dagger a$ to this new state, we find that the eigenvalue is raised by one, but the state is otherwise unchanged. Any state $a^\dagger {\lvert {n} \rangle}$ is an eigenstate of $a^\dagger a$, and therefore also an eigenstate of the Hamiltonian.

Play the same game and win big by discovering that

\begin{aligned}a^\dagger a ( a {\lvert {n} \rangle} ) = (\lambda_n -1) (a {\lvert {n} \rangle} )\end{aligned} \hspace{\stretch{1}}(8.182)
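These raising and lowering relations are easy to check numerically with the same truncated number-basis matrices; a sketch (basis size and test level $n$ chosen arbitrarily):

```python
import numpy as np

# Check (8.181) and (8.182): a^dagger|n> and a|n> are eigenstates of
# a^dagger a with eigenvalues n+1 and n-1 respectively.
N = 20
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
ad = a.T
num = ad @ a                     # the operator a^dagger a

n = 5
ket_n = np.zeros(N)
ket_n[n] = 1.0
raised = ad @ ket_n              # proportional to |n+1>
lowered = a @ ket_n              # proportional to |n-1>
print(np.allclose(num @ raised, (n + 1) * raised))    # True
print(np.allclose(num @ lowered, (n - 1) * lowered))  # True
```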

Because $a^\dagger a$ is positive semi-definite, the eigenvalues $\lambda_n$ cannot be lowered indefinitely, so there must be some lowest state ${\lvert {0} \rangle}$ such that

\begin{aligned}a {\lvert {0} \rangle} = 0 {\lvert {0} \rangle}\end{aligned} \hspace{\stretch{1}}(8.183)

which implies

\begin{aligned}a^\dagger (a {\lvert {0} \rangle}) = (a^\dagger a) {\lvert {0} \rangle} = 0\end{aligned} \hspace{\stretch{1}}(8.184)

so from 8.180 we have

\begin{aligned}\lambda_0 = 0\end{aligned} \hspace{\stretch{1}}(8.185)

Since each application of $a$ lowers $\lambda_n$ by one, and the sequence must terminate at $\lambda_0 = 0$, we can identify $\lambda_n = n$. That is,

\begin{aligned}\lambda_n = \left( \frac{E_n}{\hbar\omega} - \frac{1}{{2}} \right) = n,\end{aligned} \hspace{\stretch{1}}(8.186)

or

\begin{aligned}\frac{E_n}{\hbar\omega} = n + \frac{1}{{2}}\end{aligned} \hspace{\stretch{1}}(8.187)

or

\begin{aligned}E_n = \hbar \omega \left( n + \frac{1}{{2}} \right)\end{aligned} \hspace{\stretch{1}}(8.188)

where $n = 0, 1, 2, \cdots$.

We can write

\begin{aligned}\hbar \omega \left( a^\dagger a + \frac{1}{{2}} \mathbf{1} \right) {\lvert {n} \rangle} &= E_n {\lvert {n} \rangle} \\ a^\dagger a {\lvert {n} \rangle} + \frac{1}{{2}} {\lvert {n} \rangle} &= \frac{E_n}{\hbar \omega} {\lvert {n} \rangle} \\ \end{aligned}

or

\begin{aligned}a^\dagger a {\lvert {n} \rangle} = \left( \frac{E_n}{\hbar \omega} - \frac{1}{{2}} \right) {\lvert {n} \rangle} = \lambda_n {\lvert {n} \rangle} = n {\lvert {n} \rangle}\end{aligned} \hspace{\stretch{1}}(8.189)

We call this operator $a^\dagger a = N$, the number operator, so that

\begin{aligned}N {\lvert {n} \rangle} = n {\lvert {n} \rangle}\end{aligned} \hspace{\stretch{1}}(8.190)
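An independent check: building $H = P^2/2m + m \omega^2 X^2/2$ from the matrix representations of $X$ and $P$ in 8.173 and diagonalizing should reproduce $E_n = \hbar \omega ( n + 1/2 )$ for the low-lying levels (truncation spoils only the highest ones). A sketch with $\hbar = m = \omega = 1$:

```python
import numpy as np

# Build X and P from a, a^dagger per (8.173) with hbar = m = omega = 1,
# then diagonalize H = P^2/2 + X^2/2.
N = 20
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
ad = a.T
X = (ad + a) / np.sqrt(2)
P = 1j * (ad - a) / np.sqrt(2)
H = (P @ P).real / 2 + X @ X / 2

E = np.linalg.eigvalsh(H)
print(np.allclose(E[:5], np.arange(5) + 0.5))   # lowest energies are n + 1/2
```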

Relating states.

Recall the calculation we performed for

\begin{aligned}L_{+} {\lvert {lm} \rangle} &= C_{+} {\lvert {l, m+1} \rangle} \\ L_{-} {\lvert {lm} \rangle} &= C_{-} {\lvert {l, m-1} \rangle}\end{aligned} \hspace{\stretch{1}}(9.191)

where $C_{+}$ and $C_{-}$ are constants. The next game we are going to play is to work out $C_n$ for the lowering operation

\begin{aligned}a{\lvert {n} \rangle} = C_n {\lvert {n-1} \rangle}\end{aligned} \hspace{\stretch{1}}(9.193)

and the raising operation

\begin{aligned}a^\dagger {\lvert {n} \rangle} = B_n {\lvert {n+1} \rangle}.\end{aligned} \hspace{\stretch{1}}(9.194)

For the Hermitian conjugate of $a {\lvert {n} \rangle}$ we have

\begin{aligned}(a {\lvert {n} \rangle})^\dagger = ( C_n {\lvert {n-1} \rangle} )^\dagger = C_n^{*} {\langle {n-1} \rvert}\end{aligned} \hspace{\stretch{1}}(9.195)

So

\begin{aligned}({\langle {n} \rvert} a^\dagger) (a {\lvert {n} \rangle}) = C_n C_n^{*} \left\langle{{n-1}} \vert {{n-1}}\right\rangle = {\left\lvert{C_n}\right\rvert}^2\end{aligned} \hspace{\stretch{1}}(9.196)

Expanding the LHS we have

\begin{aligned}{\left\lvert{C_n}\right\rvert}^2 &={\langle {n} \rvert} a^\dagger a {\lvert {n} \rangle} \\ &={\langle {n} \rvert} n {\lvert {n} \rangle} \\ &=n \left\langle{{n}} \vert {{n}}\right\rangle \\ &=n \end{aligned}

so that, choosing the phase real and positive,

\begin{aligned}C_n = \sqrt{n}\end{aligned} \hspace{\stretch{1}}(9.197)

Similarly

\begin{aligned}({\langle {n} \rvert} a) (a^\dagger {\lvert {n} \rangle}) = B_n B_n^{*} \left\langle{{n+1}} \vert {{n+1}}\right\rangle = {\left\lvert{B_n}\right\rvert}^2\end{aligned} \hspace{\stretch{1}}(9.198)

and

\begin{aligned}{\left\lvert{B_n}\right\rvert}^2 &={\langle {n} \rvert} \underbrace{a a^\dagger}_{a a^\dagger - a^\dagger a = \mathbf{1}} {\lvert {n} \rangle} \\ &={\langle {n} \rvert} \left( \mathbf{1} + a^\dagger a \right) {\lvert {n} \rangle} \\ &=(1 + n) \left\langle{{n}} \vert {{n}}\right\rangle \\ &=1 + n \end{aligned}

so that, again choosing the phase real and positive,

\begin{aligned}B_n = \sqrt{n + 1}\end{aligned} \hspace{\stretch{1}}(9.199)
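These coefficients can also be read off numerically as matrix elements of the truncated $a$ and $a^\dagger$; a sketch, checked at an arbitrary level $n$:

```python
import numpy as np

# C_n = <n-1| a |n> = sqrt(n), B_n = <n+1| a^dagger |n> = sqrt(n+1).
N = 20
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
ad = a.T

n = 7
ket_n = np.zeros(N)
ket_n[n] = 1.0
C_n = (a @ ket_n)[n - 1]
B_n = (ad @ ket_n)[n + 1]
print(np.isclose(C_n, np.sqrt(n)), np.isclose(B_n, np.sqrt(n + 1)))  # True True
```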

Heisenberg picture.

\paragraph{How does the lowering operator $a$ evolve in time?}

\paragraph{A:} Recall that for a general operator $A$, we have for the time evolution of that operator

\begin{aligned}i \hbar \frac{d A}{dt} = \left[{ A },{H}\right]\end{aligned} \hspace{\stretch{1}}(10.200)

Let’s solve this one.

\begin{aligned}i \hbar \frac{d a}{dt} &= \left[{ a },{H}\right] \\ &= \left[{ a },{ \hbar \omega (a^\dagger a + \mathbf{1}/2) }\right] \\ &= \hbar\omega \left[{ a },{ (a^\dagger a + \mathbf{1}/2) }\right] \\ &= \hbar\omega \left[{ a },{ a^\dagger a }\right] \\ &= \hbar\omega \left( a a^\dagger a - a^\dagger a a \right) \\ &= \hbar\omega \left( (a a^\dagger) a - a^\dagger a a \right) \\ &= \hbar\omega \left( (a^\dagger a + \mathbf{1}) a - a^\dagger a a \right) \\ &= \hbar\omega a \end{aligned}

Even though $a$ is an operator, in the Heisenberg picture it undergoes a time evolution, so we can treat it as a function of time and solve the differential equation

\begin{aligned}\frac{d a}{dt} = -i \omega a \end{aligned} \hspace{\stretch{1}}(10.201)

This has the solution

\begin{aligned}a = a(0) e^{-i \omega t}\end{aligned} \hspace{\stretch{1}}(10.202)

Here $a(0)$ is an operator, the value of that operator at $t = 0$. The exponential is just a scalar (unaffected by the operator), so we can place it on either side of the operator as desired.

\paragraph{CHECK:}

\begin{aligned}a' = a(0) \frac{d}{dt} e^{-i \omega t} = a(0) (-i \omega) e^{-i \omega t} = -i \omega a\end{aligned} \hspace{\stretch{1}}(10.203)
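We can also check this evolution numerically: $a(t) = e^{i H t/\hbar} a e^{-i H t/\hbar}$ should equal $a(0) e^{-i \omega t}$, and because $H$ is diagonal in the number basis the conjugation is exact even for truncated matrices. A sketch with $\hbar = \omega = 1$:

```python
import numpy as np

# Heisenberg evolution of a: conjugate by the (diagonal) propagator
# U = exp(-i H t) with H = diag(n + 1/2), hbar = omega = 1.
N = 20
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
t = 0.7
U = np.diag(np.exp(-1j * (np.arange(N) + 0.5) * t))   # e^{-iHt}
a_t = np.conj(U) @ a @ U                              # e^{iHt} a e^{-iHt}
print(np.allclose(a_t, a * np.exp(-1j * t)))          # True
```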

A couple of comments on the Schr\"{o}dinger picture.

We do not do this in class, but it is very similar to the approach used for the hydrogen atom. See the text for full details.

In the Schr\"{o}dinger picture,

\begin{aligned}-\frac{\hbar^2}{2m} \frac{d^2 u}{dx^2} + \frac{1}{{2}} m \omega^2 x^2 u = E u\end{aligned} \hspace{\stretch{1}}(11.204)

This works directly with the wave function representation; we can relate the two pictures by noting that this equation follows from the identification $u = u(x) = \left\langle{{x}} \vert {{u}}\right\rangle$.

In 11.204, we can switch to dimensionless quantities with

\begin{aligned}\xi = \alpha x\end{aligned} \hspace{\stretch{1}}(11.205)

with

\begin{aligned}\alpha = \sqrt{\frac{m \omega}{\hbar}}\end{aligned} \hspace{\stretch{1}}(11.206)

This gives, with $\lambda = 2E/\hbar\omega$,

\begin{aligned}\frac{d^2 u}{d\xi^2} + (\lambda - \xi^2) u = 0\end{aligned} \hspace{\stretch{1}}(11.207)

We can solve this with power series methods, finding that normalizability requires the series to terminate, which lets us write the solutions in terms of the Hermite polynomials (courtesy of the clever French once again).

When all is said and done we will get the energy eigenvalues once again

\begin{aligned}E = E_n = \hbar \omega \left( n + \frac{1}{{2}} \right)\end{aligned} \hspace{\stretch{1}}(11.208)
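As a check on this claim, we can verify symbolically that $u_n(\xi) = H_n(\xi) e^{-\xi^2/2}$ satisfies 11.207 with $\lambda = 2n + 1$, which is equivalent to $E_n = \hbar \omega ( n + 1/2 )$. A sketch using sympy:

```python
import sympy as sp

# Verify u'' + (lambda - xi^2) u = 0 for u = H_n(xi) exp(-xi^2/2)
# with lambda = 2n + 1, for the first few n.
xi = sp.symbols('xi')
residuals = []
for n in range(5):
    u = sp.hermite(n, xi) * sp.exp(-xi**2 / 2)
    residuals.append(sp.simplify(sp.diff(u, xi, 2) + (2*n + 1 - xi**2) * u))
print(residuals)   # [0, 0, 0, 0, 0]
```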

Back to the Heisenberg picture.

Let us express

\begin{aligned}\left\langle{{x}} \vert {{n}}\right\rangle = u_n(x)\end{aligned} \hspace{\stretch{1}}(12.209)

With

\begin{aligned}a {\lvert {0} \rangle} = 0,\end{aligned} \hspace{\stretch{1}}(12.210)

we have

\begin{aligned}0 =\left( X + i \frac{P}{m \omega} \right) {\lvert {0} \rangle},\end{aligned} \hspace{\stretch{1}}(12.211)

and

\begin{aligned}0 &= {\langle {x} \rvert} \left( X + i \frac{P}{m \omega} \right) {\lvert {0} \rangle} \\ &= {\langle {x} \rvert} X {\lvert {0 } \rangle} + i \frac{1}{m \omega} {\langle {x} \rvert} P {\lvert {0} \rangle} \\ &= x \left\langle{{x}} \vert {{0}}\right\rangle + i \frac{1}{m \omega} {\langle {x} \rvert} P {\lvert {0} \rangle} \\ \end{aligned}

Recall that our matrix element of the momentum operator is

\begin{aligned}{\langle {x'} \rvert} P {\lvert {x} \rangle} = \delta(x - x') \left( -i \hbar \frac{d}{dx} \right)\end{aligned} \hspace{\stretch{1}}(12.212)

\begin{aligned}{\langle {x} \rvert} P {\lvert {0} \rangle} &={\langle {x} \rvert} P \underbrace{\int {\lvert {x'} \rangle} {\langle {x'} \rvert} dx' }_{= \mathbf{1}}{\lvert {0} \rangle} \\ &=\int {\langle {x} \rvert} P {\lvert {x'} \rangle} \left\langle{{x'}} \vert {{0}}\right\rangle dx' \\ &=\int \delta(x - x') \left( -i \hbar \frac{d}{dx} \right)\left\langle{{x'}} \vert {{0}}\right\rangle dx' \\ &=\left( -i \hbar \frac{d}{dx} \right)\left\langle{{x}} \vert {{0}}\right\rangle\end{aligned}

We have then

\begin{aligned}0 =x u_0(x) + \frac{\hbar}{m \omega} \frac{d u_0(x)}{dx}\end{aligned} \hspace{\stretch{1}}(12.213)

NOTE: a picture of the solution to this LDE was on the slide, but I did not look closely enough.
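Equation 12.213 is a separable first order LDE with a standard Gaussian solution; a sketch:

\begin{aligned}\frac{d u_0}{u_0} &= - \frac{m \omega}{\hbar} x \, dx \\ \ln u_0 &= - \frac{m \omega x^2}{2 \hbar} + \text{const} \\ u_0(x) &= A e^{- m \omega x^2 / 2 \hbar},\end{aligned}

where normalization fixes $A = (m \omega/\pi \hbar)^{1/4}$.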