Peeter Joot's (OLD) Blog.

Math, physics, perl, and programming obscurity.


One more adiabatic perturbation derivation.

Posted by peeterjoot on December 8, 2011


Motivation.

I liked one of the adiabatic perturbation derivations that I did to review the material, and am recording it for reference.

Build up.

In time dependent perturbation theory we started by noting that our ket in the interaction picture, for a Hamiltonian H = H_0 + H'(t), took the form

\begin{aligned}{\left\lvert {\alpha_S(t)} \right\rangle} = e^{-i H_0 t/\hbar} {\left\lvert {\alpha_I(t)} \right\rangle} = e^{-i H_0 t/\hbar} U_I(t) {\left\lvert {\alpha_I(0)} \right\rangle}.\end{aligned} \hspace{\stretch{1}}(2.1)

Here we have basically assumed that the time evolution can be factored into a portion dependent on only the static portion of the Hamiltonian, with some other operator U_I(t), providing the remainder of the time evolution. From 2.1 that operator U_I(t) is found to behave according to

\begin{aligned}i \hbar \frac{d{{U_I}}}{dt} = e^{i H_0 t/\hbar} H'(t) e^{-i H_0 t/\hbar} U_I,\end{aligned} \hspace{\stretch{1}}(2.2)

but for our purposes we just assumed it existed, and used this for motivation. With the assumption that the interaction picture kets can be written in terms of the basis kets for the system at t=0 we write our Schr\”{o}dinger ket as

\begin{aligned}{\left\lvert {\psi} \right\rangle} = \sum_k e^{-i H_0 t/\hbar} a_k(t) {\left\lvert {k} \right\rangle}= \sum_k e^{-i \omega_k t} a_k(t) {\left\lvert {k} \right\rangle},\end{aligned} \hspace{\stretch{1}}(2.3)

where \omega_k = E_k^0/\hbar, and {\left\lvert {k} \right\rangle} are the energy eigenkets for the initial time problem

\begin{aligned}H_0 {\left\lvert {k} \right\rangle} = E_k^0 {\left\lvert {k} \right\rangle}.\end{aligned} \hspace{\stretch{1}}(2.4)

Adiabatic case.

For the adiabatic problem, we assume the system is changing very slowly, as described by the instantaneous energy eigenkets

\begin{aligned}H(t) {\left\lvert {k(t)} \right\rangle} = E_k(t) {\left\lvert {k(t)} \right\rangle}.\end{aligned} \hspace{\stretch{1}}(3.5)

Can we assume a similar representation to 2.3 above, but allow {\left\lvert {k} \right\rangle} to vary in time? This doesn’t quite work since {\left\lvert {k(t)} \right\rangle} are no longer eigenkets of H_0

\begin{aligned}{\left\lvert {\psi} \right\rangle} = \sum_k e^{-i H_0 t/\hbar} a_k(t) {\left\lvert {k(t)} \right\rangle}\ne \sum_k e^{-i \omega_k t} a_k(t) {\left\lvert {k(t)} \right\rangle}.\end{aligned} \hspace{\stretch{1}}(3.6)

Operating with e^{i H_0 t/\hbar} does not give the proper time evolution of {\left\lvert {k(t)} \right\rangle}, and we will in general have a more complex functional dependence in our evolution operator for each {\left\lvert {k(t)} \right\rangle}. Instead of an \omega_k t dependence in this time evolution operator let’s assume we have some function \alpha_k(t) to be determined, and can write our ket as

\begin{aligned}{\left\lvert {\psi} \right\rangle} = \sum_k e^{-i \alpha_k(t)} a_k(t) {\left\lvert {k(t)} \right\rangle}.\end{aligned} \hspace{\stretch{1}}(3.7)

Operating on this with our energy operator equation we have

\begin{aligned}0 &=\left(H - i \hbar \frac{d}{dt} \right) {\left\lvert {\psi} \right\rangle} \\ &=\left(H - i \hbar \frac{d}{dt} \right) \sum_k e^{-i \alpha_k} a_k {\left\lvert {k} \right\rangle} \\ &=\sum_k e^{-i \alpha_k(t)} \left( \left( E_k a_k-i \hbar (-i \alpha_k' a_k + a_k')\right) {\left\lvert {k} \right\rangle}-i \hbar a_k {\left\lvert {k'} \right\rangle}\right) \\ \end{aligned}

Here I’ve written {\left\lvert {k'} \right\rangle} = d{\left\lvert {k} \right\rangle}/dt. In our original time dependent perturbation treatment the -i \alpha_k' term was -i \omega_k, so this killed off the E_k. If we assume this still kills off the E_k, we must have

\begin{aligned}\alpha_k = \frac{1}{{\hbar}} \int_0^t E_k(t') dt',\end{aligned} \hspace{\stretch{1}}(3.8)

and are left with

\begin{aligned}0=\sum_k e^{-i \alpha_k(t)} \left( a_k' {\left\lvert {k} \right\rangle}+a_k {\left\lvert {k'} \right\rangle}\right).\end{aligned} \hspace{\stretch{1}}(3.9)

Bra’ing with {\left\langle {m} \right\rvert} we have

\begin{aligned}0=e^{-i \alpha_m(t)} a_m' +e^{-i \alpha_m(t)} a_m \left\langle{{m}} \vert {{m'}}\right\rangle+\sum_{k \ne m} e^{-i \alpha_k(t)} a_k \left\langle{{m}} \vert {{k'}}\right\rangle,\end{aligned} \hspace{\stretch{1}}(3.10)

or

\begin{aligned}a_m' +a_m \left\langle{{m}} \vert {{m'}}\right\rangle=-\sum_{k \ne m} e^{-i \alpha_k(t)} e^{i \alpha_m(t)} a_k \left\langle{{m}} \vert {{k'}}\right\rangle,\end{aligned} \hspace{\stretch{1}}(3.11)

The LHS is a perfect differential if we introduce an integrating factor e^{\int_0^t \left\langle{{m}} \vert {{m'}}\right\rangle dt'}, so we can write

\begin{aligned}e^{-\int_0^t \left\langle{{m}} \vert {{m'}}\right\rangle} ( a_m e^{\int_0^t \left\langle{{m}} \vert {{m'}}\right\rangle } )'=-\sum_{k \ne m} e^{-i \alpha_k(t)} e^{i \alpha_m(t)} a_k \left\langle{{m}} \vert {{k'}}\right\rangle,\end{aligned} \hspace{\stretch{1}}(3.12)

This suggests that we want to form a new function

\begin{aligned}b_m = a_m e^{\int_0^t \left\langle{{m}} \vert {{m'}}\right\rangle } \end{aligned} \hspace{\stretch{1}}(3.13)

or

\begin{aligned}a_m = b_m e^{-\int_0^t \left\langle{{m}} \vert {{m'}}\right\rangle } \end{aligned} \hspace{\stretch{1}}(3.14)

Plugging this into our assumed representation we have a more concrete form

\begin{aligned}{\left\lvert {\psi} \right\rangle} = \sum_k e^{- \int_0^t dt' ( i \omega_k + \left\langle{{k}} \vert {{k'}}\right\rangle ) } b_k(t) {\left\lvert {k(t)} \right\rangle}.\end{aligned} \hspace{\stretch{1}}(3.15)

Writing

\begin{aligned}\Gamma_k = i \left\langle{{k}} \vert {{k'}}\right\rangle,\end{aligned} \hspace{\stretch{1}}(3.16)

this becomes

\begin{aligned}{\left\lvert {\psi} \right\rangle} = \sum_k e^{- i\int_0^t dt' ( \omega_k - \Gamma_k ) } b_k(t) {\left\lvert {k(t)} \right\rangle}.\end{aligned} \hspace{\stretch{1}}(3.17)

A final pass.

Now that we appear to have a good representation for any given state, let's start over and examine the time evolution by reapplying our instantaneous energy operator equality

\begin{aligned}0 &=\left(H - i \hbar \frac{d}{dt} \right){\left\lvert {\psi} \right\rangle}  \\ &=\left(H - i \hbar \frac{d}{dt} \right)\sum_k e^{- i\int_0^t dt' ( \omega_k - \Gamma_k ) } b_k {\left\lvert {k} \right\rangle} \\ &=- i \hbar \sum_k e^{- i\int_0^t dt' ( \omega_k - \Gamma_k ) } \left(i \Gamma_kb_k {\left\lvert {k} \right\rangle} +b_k' {\left\lvert {k} \right\rangle} +b_k {\left\lvert {k'} \right\rangle} \right).\end{aligned}

Bra’ing with {\left\langle {m} \right\rvert} we find

\begin{aligned}0&=e^{- i\int_0^t dt' ( \omega_m - \Gamma_m ) } i \Gamma_mb_m +e^{- i\int_0^t dt' ( \omega_m - \Gamma_m ) } b_m' \\ &+e^{- i\int_0^t dt' ( \omega_m - \Gamma_m ) } b_m \left\langle{{m}} \vert {{m'}}\right\rangle +\sum_{k \ne m}e^{- i\int_0^t dt' ( \omega_k - \Gamma_k ) } b_k \left\langle{{m}} \vert {{k'}}\right\rangle \end{aligned}

Since i \Gamma_m = -\left\langle{{m}} \vert {{m'}}\right\rangle, the first and third terms cancel, leaving just

\begin{aligned}b_m'=-\sum_{k \ne m}e^{- i\int_0^t dt' ( \omega_{km} - \Gamma_{km} ) } b_k \left\langle{{m}} \vert {{k'}}\right\rangle,\end{aligned} \hspace{\stretch{1}}(3.18)

where \omega_{km} = \omega_k - \omega_m and \Gamma_{km} = \Gamma_k - \Gamma_m.
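
As a sanity check of these coupled equations (my addition, not part of the original derivation), here is a minimal Python sketch. It assumes a made-up two level Hamiltonian with a slow avoided crossing sweep, integrates the exact Schrödinger equation, and confirms that the population stays in the instantaneous ground state, which is what we expect when the right hand side of 3.18 is negligible for a slow change.

\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

HBAR = 1.0
T_SWEEP = 200.0   # total sweep time; the larger this is, the more adiabatic the sweep

def H(t):
    """Hypothetical slowly swept two level Hamiltonian (illustration only)."""
    delta = 2.0 * np.tanh((t - T_SWEEP / 2) / (T_SWEEP / 4))
    g = 0.5
    return np.array([[-delta, g], [g, delta]], dtype=complex)

def rhs(t, psi):
    # i hbar d|psi>/dt = H(t) |psi>
    return (-1j / HBAR) * (H(t) @ psi)

# Start in the instantaneous ground state |k(0)>.
_, v0 = np.linalg.eigh(H(0.0))
psi0 = v0[:, 0].astype(complex)

sol = solve_ivp(rhs, (0.0, T_SWEEP), psi0, rtol=1e-8, atol=1e-10)

# Project the final state onto the instantaneous ground state |k(T)>.
_, vT = np.linalg.eigh(H(T_SWEEP))
p_ground = abs(vT[:, 0].conj() @ sol.y[:, -1]) ** 2
print(f"final instantaneous ground state population: {p_ground:.6f}")  # close to 1 for a slow sweep
\end{verbatim}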

Summary.

We assumed that a ket for the system has a representation in the form

\begin{aligned}{\left\lvert {\psi} \right\rangle} = \sum_k e^{- i \alpha_k(t) } a_k(t) {\left\lvert {k(t)} \right\rangle},\end{aligned} \hspace{\stretch{1}}(4.20)

where a_k(t) and \alpha_k(t) are given or to be determined. Application of our energy operator identity provides us with an alternate representation that simplifies the results

\begin{aligned}{\left\lvert {\psi} \right\rangle} = \sum_k e^{- i\int_0^t dt' ( \omega_k - \Gamma_k ) } b_k(t) {\left\lvert {k(t)} \right\rangle}.\end{aligned} \hspace{\stretch{1}}(4.20)

With

\begin{aligned}{\left\lvert {m'} \right\rangle} &= \frac{d}{dt} {\left\lvert {m} \right\rangle} \\ \Gamma_m &= i \left\langle{{m}} \vert {{m'}}\right\rangle \\ \omega_{km} &= \omega_k - \omega_m \\ \Gamma_{km} &= \Gamma_k - \Gamma_m\end{aligned} \hspace{\stretch{1}}(4.21)

we find that the dynamics of the coefficients are given by

\begin{aligned}b_m'=-\sum_{k \ne m}e^{- i\int_0^t dt' ( \omega_{km} - \Gamma_{km} ) } b_k \left\langle{{m}} \vert {{k'}}\right\rangle.\end{aligned} \hspace{\stretch{1}}(4.25)


PHY456H1F: Quantum Mechanics II. Lecture 7 (Taught by Prof J.E. Sipe). Time dependent perturbation

Posted by peeterjoot on October 3, 2011


Recap: Interaction picture

We’ll use the interaction picture to examine time dependent perturbations. We wrote our Schrödinger ket in terms of the interaction ket

\begin{aligned}{\lvert {\psi} \rangle}= e^{-i H_0 (t - t_0)/\hbar}{\lvert {\psi_I(t)} \rangle},\end{aligned} \hspace{\stretch{1}}(1.1)

where

\begin{aligned}{\lvert {\psi_I} \rangle}= U_I(t, t_0) {\lvert {\psi_I(t_0)} \rangle}.\end{aligned} \hspace{\stretch{1}}(1.2)

Our dynamics is given by the operator equation

\begin{aligned}i \hbar \frac{d{{}}}{dt} U_I(t, t_0) = \bar{H}'(t) U_I(t, t_0),\end{aligned} \hspace{\stretch{1}}(1.3)

where

\begin{aligned}\bar{H}'(t) =e^{\frac{i}{\hbar} H_0(t - t_0)} H'(t) e^{-\frac{i}{\hbar} H_0(t - t_0)}.\end{aligned} \hspace{\stretch{1}}(1.4)

We can formally solve 1.3 by writing

\begin{aligned}U_I(t, t_0) = I - \frac{i}{\hbar} \int_{t_0}^t dt' \bar{H}'(t') U_I(t', t_0).\end{aligned} \hspace{\stretch{1}}(1.5)

This is easy enough to verify by direct differentiation

\begin{aligned}i \hbar \frac{d{{}}}{dt} U_I&=i \hbar \frac{d{{}}}{dt}\left( - \frac{i}{\hbar} \int_{t_0}^t dt' \bar{H}'(t') U_I(t', t_0) \right) \\ &=\bar{H}'(t) U_I(t, t_0) \frac{dt}{dt}-\bar{H}'(t_0) U_I(t_0, t_0) \frac{dt_0}{dt} \\ &=\bar{H}'(t) U_I(t, t_0)\end{aligned}

This is a bit of a chicken-and-egg expression, since the unknown U_I(t', t_0) appears under the integral on the right hand side.

We start with an initial estimate of the operator to be determined, and iterate. This can seem like an odd thing to do, but one can find books on just this integral kernel iteration method (like the nice little Dover book [1] that has sat on my (Peeter’s) shelf all lonely so many years).

For t near t_0, try

\begin{aligned}U_I(t, t_0) \approx I - \frac{i}{\hbar} \int_{t_0}^t dt' \bar{H}'(t').\end{aligned} \hspace{\stretch{1}}(1.6)

A second order iteration is now possible

\begin{aligned}\begin{aligned}U_I(t, t_0)&\approx I - \frac{i}{\hbar} \int_{t_0}^t dt' \bar{H}'(t') \left(I - \frac{i}{\hbar} \int_{t_0}^{t'} dt'' \bar{H}'(t'')\right) \\ &=I - \frac{i}{\hbar} \int_{t_0}^t dt' \bar{H}'(t') + \left(\frac{-i}{\hbar}\right)^2\int_{t_0}^t dt' \bar{H}'(t') \int_{t_0}^{t'} dt'' \bar{H}'(t'')\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.7)

It is possible to continue this iteration, and this approach is considered in some detail in section 3.3 of the text [2], and is apparently also the basis for Feynman diagrams.
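
To make the iteration a bit more concrete, here is a small numerical sketch (my own addition, using a made-up scalar \bar{H}'(t) so that time ordering is a non-issue). It performs the iteration of 1.5 on a time grid and compares the result with the closed form exponential that is available in the scalar case.

\begin{verbatim}
import numpy as np

HBAR = 1.0
N = 2001
ts = np.linspace(0.0, 1.0, N)
dt = ts[1] - ts[0]

def Hbar(t):
    # made-up scalar "interaction Hamiltonian", so that time ordering is trivial
    return 0.8 * np.cos(3.0 * t)

def cumulative_integral(y):
    # trapezoid rule running integral, same length as the grid
    return np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * dt)))

# Picard style iteration of U_I(t) = 1 - (i/hbar) int_{t_0}^t Hbar(t') U_I(t') dt'
U = np.ones(N, dtype=complex)              # zeroth iterate, U_I = I
for _ in range(8):
    U = 1.0 - (1j / HBAR) * cumulative_integral(Hbar(ts) * U)

# In the scalar case the exact answer is the ordinary exponential of the integral.
exact = np.exp((-1j / HBAR) * cumulative_integral(Hbar(ts)))
print("max deviation after 8 iterations:", np.max(np.abs(U - exact)))
\end{verbatim}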

Time dependent perturbation theory.

As covered in section 17 of the text, we’ll split the Hamiltonian into time independent and time dependent terms

\begin{aligned}H(t) = H_0 + H'(t),\end{aligned} \hspace{\stretch{1}}(2.8)

and work in the interaction picture with

\begin{aligned}{\lvert {\psi_I(t)} \rangle} = \sum_n \tilde{c}_n(t) {\lvert {\psi_n^{(0)} } \rangle}.\end{aligned} \hspace{\stretch{1}}(2.9)

Our Schrödinger ket is then

\begin{aligned}\begin{aligned}{\lvert {\psi(t)} \rangle}&=e^{-i H_0 (t- t_0)/\hbar}{\lvert {\psi_I(t) } \rangle} \\ &=\sum_n \tilde{c}_n(t)e^{-i E_n^{(0)}(t- t_0)/\hbar}{\lvert {\psi_n^{(0)} } \rangle}.\end{aligned}\end{aligned} \hspace{\stretch{1}}(2.10)

With a definition

\begin{aligned}c_n(t) = \tilde{c}_n(t) e^{i E_n t_0/\hbar},\end{aligned} \hspace{\stretch{1}}(2.11)

(where we leave off the zero superscript for the unperturbed state), our time evolved ket becomes

\begin{aligned}{\lvert {\psi(t)} \rangle}=\sum_n c_n(t)e^{-i E_n t/\hbar}{\lvert {\psi_n^{(0)} } \rangle}.\end{aligned} \hspace{\stretch{1}}(2.12)

We can now plug 2.9 into our evolution equation

\begin{aligned}i\hbar \frac{d{{}}}{dt} {\lvert {\psi_I(t)} \rangle}&=\bar{H}'(t) {\lvert {\psi_I(t)} \rangle} \\ &=e^{\frac{i}{\hbar} H_0(t - t_0)} H'(t) e^{-\frac{i}{\hbar} H_0(t - t_0)}{\lvert {\psi_I(t)} \rangle},\end{aligned}

which gives us

\begin{aligned}i \hbar \sum_p \frac{\partial {}}{\partial {t}}\tilde{c}_p(t) {\lvert {\psi_p^{(0)} } \rangle}=e^{\frac{i}{\hbar} H_0(t - t_0)} H'(t) e^{-\frac{i}{\hbar} H_0(t - t_0)}\sum_n\tilde{c}_n(t) {\lvert {\psi_n^{(0)} } \rangle}.\end{aligned} \hspace{\stretch{1}}(2.13)

We can apply the bra {\langle {\psi_m^{(0)}} \rvert} to this equation, yielding

\begin{aligned}i \hbar \frac{\partial {}}{\partial {t}}\tilde{c}_m(t)=\sum_n\tilde{c}_n(t)e^{\frac{i}{\hbar} E_m(t - t_0)}{\langle {\psi_m^{(0)}} \rvert} H'(t){\lvert {\psi_n^{(0)} } \rangle}e^{-\frac{i}{\hbar} E_n(t - t_0)}.\end{aligned} \hspace{\stretch{1}}(2.14)

With

\begin{aligned}\omega_m &= \frac{E_m}{\hbar} \\ \omega_{mn} &= \omega_m - \omega_n \\ H_{mn}'(t) &= {\langle {\psi_m^{(0)}} \rvert} H'(t) {\lvert {\psi_n^{(0)} } \rangle},\end{aligned} \hspace{\stretch{1}}(2.15)

this is

\begin{aligned}i \hbar \frac{\partial {\tilde{c}_m(t) }}{\partial {t}}=\sum_n\tilde{c}_n(t)e^{i \omega_{mn}(t - t_0)}H_{mn}'(t)\end{aligned} \hspace{\stretch{1}}(2.18)

Inverting 2.11 and plugging in

\begin{aligned}\tilde{c}_n(t) = c_n(t) e^{-i \omega_n t_0},\end{aligned} \hspace{\stretch{1}}(2.19)

yields

\begin{aligned}i \hbar \frac{\partial {c_m(t)}}{\partial {t}}e^{-i \omega_m t_0}=\sum_nc_n(t) e^{-i \omega_n t_0}e^{i\omega_{mn}t}e^{-i(\omega_m -\omega_n) t_0}H_{mn}'(t),\end{aligned} \hspace{\stretch{1}}(2.20)

from which we can cancel the exponentials on both sides yielding

\begin{aligned}i \hbar \frac{\partial {c_m(t)}}{\partial {t}}=\sum_nc_n(t)e^{i\omega_{mn}t}H_{mn}'(t)\end{aligned} \hspace{\stretch{1}}(2.21)

We are now left with all of our time dependence nicely separated out, with the coefficients c_n(t) encoding all the non-oscillatory time evolution information

\begin{aligned}H &= H_0 + H'(t) \\ {\lvert {\psi(t)} \rangle} &= \sum_n c_n(t) e^{-i\omega_n t} {\lvert {\psi_n^{(0)}} \rangle} \\ i \hbar \dot{c}_m &= \sum_n H_{mn}'(t) e^{i \omega_{mn} t} c_n(t)\end{aligned} \hspace{\stretch{1}}(2.22)
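
As an illustration of 2.22 (my addition, with made-up level frequencies and coupling matrix), the following sketch integrates the coupled c_n(t) equations directly for a two level system with a harmonic perturbation H'_{mn}(t) = V_{mn}\cos(\omega t), showing the familiar oscillatory transfer of population between the levels.

\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

HBAR = 1.0
omega = np.array([0.0, 1.0])              # made-up level frequencies omega_n = E_n / hbar
V = np.array([[0.0, 0.1], [0.1, 0.0]])    # made-up coupling matrix V_mn
w_drive = 1.0                             # drive resonant with omega_21

def Hp(t):
    # H'_{mn}(t) = V_{mn} cos(w_drive t)
    return V * np.cos(w_drive * t)

def rhs(t, c):
    # i hbar dc_m/dt = sum_n H'_{mn}(t) e^{i omega_mn t} c_n
    phases = np.exp(1j * (omega[:, None] - omega[None, :]) * t)
    return (-1j / HBAR) * ((Hp(t) * phases) @ c)

t_out = np.linspace(0.0, 120.0, 7)
sol = solve_ivp(rhs, (t_out[0], t_out[-1]), np.array([1.0, 0.0], dtype=complex),
                t_eval=t_out, rtol=1e-8, atol=1e-10)

for t, c in zip(sol.t, sol.y.T):
    print(f"t={t:6.1f}   |c_1|^2={abs(c[0])**2:.3f}   |c_2|^2={abs(c[1])**2:.3f}")
\end{verbatim}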

Perturbation expansion.

We now introduce our \lambda parametrization

\begin{aligned}H'(t) \rightarrow \lambda H'(t),\end{aligned} \hspace{\stretch{1}}(3.25)

and hope for convergence, or at least something with well defined asymptotic behavior. We have

\begin{aligned}i \hbar \dot{c}_m = \lambda \sum_n H_{mn}'(t) e^{i \omega_{mn} t} c_n(t),\end{aligned} \hspace{\stretch{1}}(3.26)

and try

\begin{aligned}c_m(t) = c_m^{(0)}(t) + \lambda c_m^{(1)}(t) + \lambda^2 c_m^{(2)}(t) + \cdots\end{aligned} \hspace{\stretch{1}}(3.27)

Plugging in, we have

\begin{aligned}i \hbar\sum_k\lambda^k \dot{c}_m^{(k)}(t)=\sum_{n,p} H_{mn}'(t) e^{i \omega_{mn} t}\lambda^{p+1} c_n^{(p)}(t).\end{aligned} \hspace{\stretch{1}}(3.28)

As before, for this to hold for all \lambda, we treat it as an equation for each power \lambda^k. Expanding explicitly for the first few powers gives us

\begin{aligned}0&= \lambda^0 \left( i \hbar \dot{c}_m^{(0)}(t) - 0 \right) \\ &+ \lambda^1 \left( i \hbar \dot{c}_m^{(1)}(t) -\sum_{n} H_{mn}'(t) e^{i \omega_{mn} t}c_n^{(0)}(t)\right) \\ &+ \lambda^2 \left( i \hbar \dot{c}_m^{(2)}(t) -\sum_{n} H_{mn}'(t) e^{i \omega_{mn} t}c_n^{(1)}(t)\right) \\ &\vdots\end{aligned}

Suppose we have a set of energy levels as depicted in the figure below.

Figure (qmTwoL7fig1): Perturbation around energy level s.

With c_n^{(i)} = 0 before the perturbation for all i \ge 1 and all n, and with c_m^{(0)} = \delta_{ms}, we can proceed iteratively, solving each equation in turn, starting with

\begin{aligned}i \hbar \dot{c}_m^{(1)} = H_{ms}'(t) e^{i \omega_{ms} t}\end{aligned} \hspace{\stretch{1}}(3.29)
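
Before moving on to the physical examples, here is a quick numerical sanity check of this first order equation (my addition; the two level system, coupling strength and drive frequency below are made up). We evaluate c_m^{(1)}(t) by direct quadrature and compare it against the exact coupled two level solution, which it should track closely for weak coupling.

\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp, cumulative_trapezoid

HBAR = 1.0
w_ms = 1.0            # made-up omega_ms for a two level system (m above s)
V = 0.01              # weak coupling strength, so first order should be accurate
w_drive = 0.8         # off resonant drive

def Hp_ms(t):
    # hypothetical matrix element H'_{ms}(t)
    return V * np.cos(w_drive * t)

ts = np.linspace(0.0, 100.0, 4001)

# First order: c_m^(1)(t) = (1/(i hbar)) int_0^t H'_{ms}(t') e^{i w_ms t'} dt'
integrand = Hp_ms(ts) * np.exp(1j * w_ms * ts)
c1 = (1.0 / (1j * HBAR)) * cumulative_trapezoid(integrand, ts, initial=0.0)

# Exact two level solution of i hbar dc_m/dt = sum_n H'_{mn} e^{i w_mn t} c_n
def rhs(t, c):
    h = Hp_ms(t)
    return (-1j / HBAR) * np.array([h * np.exp(-1j * w_ms * t) * c[1],
                                    h * np.exp(+1j * w_ms * t) * c[0]])

sol = solve_ivp(rhs, (ts[0], ts[-1]), np.array([1.0, 0.0], dtype=complex),
                t_eval=ts, rtol=1e-9, atol=1e-11)

print("max |c_m(exact) - c_m^(1)| =", np.max(np.abs(sol.y[1] - c1)))  # should be small
\end{verbatim}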

Example: Slow nucleus passing an atom.

For the dipole interaction of the previous lecture we have

\begin{aligned}H'(t) = - \boldsymbol{\mu} \cdot \mathbf{E}(t)\end{aligned} \hspace{\stretch{1}}(3.35)

with

\begin{aligned}H_{ms}' = -\boldsymbol{\mu}_{ms} \cdot \mathbf{E}(t),\end{aligned} \hspace{\stretch{1}}(3.36)

where

\begin{aligned}\boldsymbol{\mu}_{ms} ={\langle {\psi_m^{(0)}} \rvert}\boldsymbol{\mu}{\lvert {\psi_s^{(0)}} \rangle}.\end{aligned} \hspace{\stretch{1}}(3.37)

We use our previous example of a nucleus passing an atom, as depicted in the figure below.

Figure (qmTwoL7fig2): Slow nucleus passing an atom.

We have

\begin{aligned}\boldsymbol{\mu} = \sum_i q_i \mathbf{R}_i,\end{aligned} \hspace{\stretch{1}}(3.38)

the dipole moment operator for the atom, summed over its internal charges. We will have fields as depicted in the figure below.

Figure (qmTwoL7fig3): Fields for nucleus atom example.

FIXME: think through.

Example: Electromagnetic wave pulse interacting with an atom.

Consider an EM wave pulse, perhaps Gaussian, of the form depicted in the figure below.

Figure (qmTwoL7fig4): Atom interacting with an EM pulse.

\begin{aligned}E_y(t) = e^{-t^2/T^2} \cos(\omega_0 t).\end{aligned} \hspace{\stretch{1}}(3.39)

As we learned very early, perhaps sitting on our mother’s knee, we can solve the differential equation 3.29 for the first order perturbation coefficient by direct integration

\begin{aligned}c_m^{(1)}(t) =\frac{1}{{i \hbar}} \int_{-\infty}^tH_{ms}'(t') e^{i \omega_{ms} t'} dt'.\end{aligned} \hspace{\stretch{1}}(3.35)

Here the perturbation is assumed equal to zero at -\infty. Suppose our electric field is specified in terms of a Fourier transform

\begin{aligned}\mathbf{E}(t) = \int_{-\infty}^\infty \frac{d \omega}{2\pi} \mathbf{E}(\omega) e^{-i \omega t},\end{aligned} \hspace{\stretch{1}}(3.36)

so

\begin{aligned}c_m^{(1)}(t) =-\frac{\boldsymbol{\mu}_{ms}}{2 \pi i \hbar} \cdot\int_{-\infty}^\infty \int_{-\infty}^t\mathbf{E}(\omega)e^{i (\omega_{ms} -\omega) t'} dt' d\omega.\end{aligned} \hspace{\stretch{1}}(3.37)

From this, “after the perturbation”, as t \rightarrow \infty we find

\begin{aligned}c_m^{(1)}(\infty)&=-\frac{\boldsymbol{\mu}_{ms}}{2 \pi i \hbar} \cdot\int_{-\infty}^\infty \int_{-\infty}^\infty \mathbf{E}(\omega)e^{i (\omega_{ms} -\omega) t'} dt' d\omega \\ &=-\frac{\boldsymbol{\mu}_{ms}}{i \hbar} \cdot\int_{-\infty}^\infty \mathbf{E}(\omega)\delta(\omega_{ms} - \omega)d\omega\end{aligned}

since we identify

\begin{aligned}\frac{1}{{2 \pi}}\int_{-\infty}^\infty e^{i (\omega_{ms} -\omega) t'} dt' \equiv \delta(\omega_{ms} - \omega)\end{aligned} \hspace{\stretch{1}}(3.38)

Thus the steady state first order perturbation coefficient is

\begin{aligned}c_m^{(1)}(\infty)=-\frac{\boldsymbol{\mu}_{ms}}{i \hbar} \cdot\mathbf{E}(\omega_{ms}).\end{aligned} \hspace{\stretch{1}}(3.39)
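
A small numerical check of this result (my addition; the amplitude, pulse width, dipole matrix element and transition frequency below are all made up): evaluate the time domain integral of 3.35 directly for the Gaussian pulse, and compare against -(\boldsymbol{\mu}_{ms}/i\hbar)\cdot\mathbf{E}(\omega_{ms}) computed from the closed form Fourier transform of the pulse.

\begin{verbatim}
import numpy as np

HBAR = 1.0
mu_ms = 0.01                 # made-up dipole matrix element (field taken along one axis)
E0, T, w0 = 1.0, 20.0, 1.0   # made-up pulse amplitude, width and carrier frequency
w_ms = 1.05                  # made-up transition frequency, near the carrier

def E_field(t):
    # the Gaussian pulse of 3.39, with an explicit amplitude E0
    return E0 * np.exp(-t**2 / T**2) * np.cos(w0 * t)

# Time domain: c_m^(1)(inf) = (1/(i hbar)) int H'_{ms}(t) e^{i w_ms t} dt, H'_{ms} = -mu_ms E(t)
ts = np.linspace(-200.0, 200.0, 200001)
dt = ts[1] - ts[0]
c1_time = (1.0 / (1j * HBAR)) * np.sum(-mu_ms * E_field(ts) * np.exp(1j * w_ms * ts)) * dt

# Frequency domain: c_m^(1)(inf) = -(mu_ms / (i hbar)) E(w_ms), using the closed form
# Fourier transform of the Gaussian-cosine pulse, E(w) = int E(t) e^{i w t} dt.
E_w = E0 * (np.sqrt(np.pi) * T / 2.0) * (np.exp(-(w_ms - w0)**2 * T**2 / 4.0)
                                         + np.exp(-(w_ms + w0)**2 * T**2 / 4.0))
c1_freq = -(mu_ms / (1j * HBAR)) * E_w

print(c1_time, c1_freq)   # the two values should agree closely
\end{verbatim}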

Frequency symmetry for the Fourier spectrum of a real field.

We will look further at this next week, but we first require an intermediate result from transform theory. Because our field is real, we have

\begin{aligned}\mathbf{E}^{*}(t) = \mathbf{E}(t)\end{aligned} \hspace{\stretch{1}}(3.40)

so

\begin{aligned}\mathbf{E}^{*}(t)&= \int \frac{d\omega}{2 \pi} \mathbf{E}^{*}(\omega) e^{i \omega t} \\ &= \int \frac{d\omega}{2 \pi} \mathbf{E}^{*}(-\omega) e^{-i \omega t} \\ \end{aligned}

and thus

\begin{aligned}\mathbf{E}(\omega) = \mathbf{E}^{*}(-\omega),\end{aligned} \hspace{\stretch{1}}(3.41)

and

\begin{aligned}{\left\lvert{\mathbf{E}(\omega)}\right\rvert}^2 = {\left\lvert{\mathbf{E}(-\omega)}\right\rvert}^2.\end{aligned} \hspace{\stretch{1}}(3.42)

We will see shortly what the point of this aside is.
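
A tiny numerical illustration of this symmetry (my addition), using the discrete Fourier transform of a sampled real field:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, dt = 512, 0.05
t = np.arange(N) * dt
E_t = np.exp(-(t - 12.8)**2 / 4.0) * np.cos(3.0 * t) + 0.01 * rng.standard_normal(N)  # a real field

E_w = np.fft.fft(E_t)   # discrete analogue of E(omega)

# For a real E(t) the spectrum satisfies E(-omega) = E*(omega), so |E(omega)|^2 is even.
k = 10
print(np.allclose(E_w[-k], np.conj(E_w[k])))          # True
print(np.isclose(abs(E_w[-k])**2, abs(E_w[k])**2))    # True
\end{verbatim}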

References

[1] F.G. Tricomi. Integral equations. Dover Pubns, 1985.

[2] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.


PHY456H1F: Quantum Mechanics II. Lecture 6 (Taught by Prof J.E. Sipe). Interaction picture.

Posted by peeterjoot on September 27, 2011


Disclaimer.

Peeter’s lecture notes from class. May not be entirely coherent.

Interaction picture.

Recap.

Recall our table comparing the Schrödinger and Heisenberg pictures

\begin{aligned}\text{Schr\"{o}dinger picture} &\qquad \text{Heisenberg picture} \\ i \hbar \frac{d}{dt} {\lvert {\psi_s(t)} \rangle} = H {\lvert {\psi_s(t)} \rangle} &\qquad i \hbar \frac{d}{dt} O_H(t) = \left[{O_H},{H}\right] \\ {\langle {\psi_s(t)} \rvert} O_S {\lvert {\psi_s(t)} \rangle} &= {\langle {\psi_H} \rvert} O_H {\lvert {\psi_H} \rangle} \\ {\lvert {\psi_s(0)} \rangle} &= {\lvert {\psi_H} \rangle} \\ O_S &= O_H(0)\end{aligned}

A motivating example.

While fundamental Hamiltonians are independent of time, in a number of common cases, we can form approximate Hamiltonians that are time dependent. One such example is that of Coulomb excitations of an atom, as covered in section 18.3 of the text [1], and shown in the figure below.

Figure (qmTwoL6fig1): Coulomb interaction of a nucleus and heavy atom.

We consider the interaction of a neutral atom with a nucleus that is heavy enough to be treated classically. From the atom's point of view, the effects of the heavy nucleus barreling by can be described using a time dependent Hamiltonian. For the atom, that interaction Hamiltonian is

\begin{aligned}H' = \sum_i \frac{ Z e q_i }{{\left\lvert{\mathbf{r}_N(t) - \mathbf{R}_i}\right\rvert}}.\end{aligned} \hspace{\stretch{1}}(2.1)

Here \mathbf{r}_N is the position vector for the heavy nucleus, and \mathbf{R}_i is the position of each charge within the atom, where i ranges over all the internal charges, positive and negative.

Placing the origin close to the atom, we can write this interaction Hamiltonian as

\begin{aligned}H'(t) = \not{{\sum_i \frac{Z e q_i}{{\left\lvert{\mathbf{r}_N(t)}\right\rvert}}}}+ \sum_i Z e q_i \mathbf{R}_i \cdot {\left.{{\left(\frac{\partial {}}{\partial {\mathbf{r}}} \frac{1}{{{\left\lvert{ \mathbf{r}_N(t) - \mathbf{r}}\right\rvert}}}\right)}}\right\vert}_{{\mathbf{r} = 0}}\end{aligned} \hspace{\stretch{1}}(2.2)

The first term vanishes because the total charge in our neutral atom is zero. This leaves us with

\begin{aligned}\begin{aligned}H'(t) &= -\sum_i q_i \mathbf{R}_i \cdot {\left.{{\left(-\frac{\partial {}}{\partial {\mathbf{r}}} \frac{ Z e}{{\left\lvert{ \mathbf{r}_N(t) - \mathbf{r}}\right\rvert}}\right)}}\right\vert}_{{\mathbf{r} = 0}} \\ &= - \sum_i q_i \mathbf{R}_i \cdot \mathbf{E}(t),\end{aligned}\end{aligned} \hspace{\stretch{1}}(2.3)

where \mathbf{E}(t) is the electric field at the origin due to the nucleus.

Introducing a dipole moment operator for the atom

\begin{aligned}\boldsymbol{\mu} = \sum_i q_i \mathbf{R}_i,\end{aligned} \hspace{\stretch{1}}(2.4)

the interaction takes the form

\begin{aligned}H'(t) = -\boldsymbol{\mu} \cdot \mathbf{E}(t).\end{aligned} \hspace{\stretch{1}}(2.5)

Here we have a quantum mechanical operator, and a classical field taken together. This sort of dipole interaction also occurs when we treat an atom placed into an electromagnetic field, with the field treated classically as depicted in the figure below.

Figure (qmTwoL6fig2): Atom in a field.

In the figure, we can use the dipole interaction, provided \lambda \gg a, where a is the “width” of the atom.

Because it is great for examples, we will see this dipole interaction a lot.

The interaction picture.

Having talked about both the Schrödinger and Heisenberg pictures, we can now move on to describe a hybrid, one where our Hamiltonian has been split into static and time dependent parts

\begin{aligned}H(t) = H_0 + H'(t)\end{aligned} \hspace{\stretch{1}}(2.6)

We will formulate an approach for dealing with problems of this sort called the interaction picture.

This is also covered in section 3.3 of the text, albeit in a much harder to understand fashion (the text appears to try to not pull the result from a magic hat, but the steps to get to the end result are messy). It would probably have been nicer to see it this way instead.

In the Schrödinger picture our dynamics have the form

\begin{aligned}i \hbar \frac{d}{dt} {\lvert {\psi_s(t)} \rangle} = H {\lvert {\psi_s(t)} \rangle}\end{aligned} \hspace{\stretch{1}}(2.7)

How about the Heisenberg picture? We look for a solution

\begin{aligned}{\lvert {\psi_s(t)} \rangle} = U(t, t_0) {\lvert {\psi_s(t_0)} \rangle}.\end{aligned} \hspace{\stretch{1}}(2.8)

We want to find the operator that evolves the state from its value at some initial time t_0 to the state found at an arbitrary later time t. Plugging in we have

\begin{aligned}i \hbar \frac{d{{}}}{dt} U(t, t_0) {\lvert {\psi_s(t_0)} \rangle}=H(t) U(t, t_0) {\lvert {\psi_s(t_0)} \rangle}.\end{aligned} \hspace{\stretch{1}}(2.9)

This has to hold for all {\lvert {\psi_s(t_0)} \rangle}, and we can equivalently seek a solution of the operator equation

\begin{aligned}i \hbar \frac{d{{}}}{dt} U(t, t_0) = H(t) U(t, t_0),\end{aligned} \hspace{\stretch{1}}(2.10)

where

\begin{aligned}U(t_0, t_0) = I,\end{aligned} \hspace{\stretch{1}}(2.11)

the identity for the Hilbert space.

Suppose that H(t) were independent of time. Then we would find that

\begin{aligned}U(t, t_0) = e^{-i H(t - t_0)/\hbar}.\end{aligned} \hspace{\stretch{1}}(2.12)

If H(t) depends on time, can we guess that

\begin{aligned}U(t, t_0) = e^{-\frac{i}{\hbar} \int_{t_0}^t H(\tau) d\tau}\end{aligned} \hspace{\stretch{1}}(2.13)

holds? No. This may be true when H(t) is a number, but when it is an operator, the Hamiltonian does not necessarily commute with itself at different times

\begin{aligned}\left[{H(t')},{H(t'')}\right] \ne 0.\end{aligned} \hspace{\stretch{1}}(2.14)

So this is wrong in general. As an aside, for numbers, 2.13 can be verified easily. We have

\begin{aligned}i \hbar \left( e^{-\frac{i}{\hbar} \int_{t_0}^t H(\tau) d\tau} \right)'&=i \hbar \left( -\frac{i}{\hbar} \right) \left( \int_{t_0}^t H(\tau) d\tau \right)'e^{-\frac{i}{\hbar} \int_{t_0}^t H(\tau) d\tau } \\ &=\left( H(t) \frac{dt}{dt} - H(t_0) \frac{dt_0}{dt} \right)e^{-\frac{i}{\hbar} \int_{t_0}^t H(\tau) d\tau}  \\ &= H(t) U(t, t_0)\end{aligned}
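
Here is a small numerical illustration of why the guess 2.13 fails for operators (my own example, with a made-up 2 x 2 H(t) whose values at different times do not commute). Composing many small time step propagators approximates the true U(t, t_0), and the result visibly differs from the naive exponential of the integrated Hamiltonian.

\begin{verbatim}
import numpy as np
from scipy.linalg import expm

HBAR = 1.0
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(t):
    # made-up H(t) with [H(t'), H(t'')] != 0
    return 1.0 * sz + 0.8 * np.cos(2.0 * t) * sx

t0, t1, N = 0.0, 3.0, 3000
ts = np.linspace(t0, t1, N + 1)
dt = ts[1] - ts[0]

# Time ordered product of small step propagators (approximates the true U(t1, t0)).
U = np.eye(2, dtype=complex)
for k in range(N):
    tm = 0.5 * (ts[k] + ts[k + 1])
    U = expm(-1j * H(tm) * dt / HBAR) @ U

# The naive guess 2.13: exponential of the integrated Hamiltonian.
H_int = sum(H(0.5 * (ts[k] + ts[k + 1])) for k in range(N)) * dt
U_naive = expm(-1j * H_int / HBAR)

print("max |U - U_naive| =", np.max(np.abs(U - U_naive)))   # noticeably nonzero
\end{verbatim}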

Expectations.

Suppose that we do find U(t, t_0). Then our expectation takes the form

\begin{aligned}{\langle {\psi_s(t)} \rvert} O_s {\lvert {\psi_s(t)} \rangle} = {\langle {\psi_s(t_0)} \rvert} U^\dagger(t, t_0) O_s U(t, t_0) {\lvert {\psi_s(t_0)} \rangle} \end{aligned} \hspace{\stretch{1}}(2.15)

Put

\begin{aligned}{\lvert {\psi_H} \rangle} = {\lvert {\psi_s(t_0)} \rangle},\end{aligned} \hspace{\stretch{1}}(2.16)

and form

\begin{aligned}O_H = U^\dagger(t, t_0) O_s U(t, t_0) \end{aligned} \hspace{\stretch{1}}(2.17)

so that our expectation has the familiar representations

\begin{aligned}{\langle {\psi_s(t)} \rvert} O_s {\lvert {\psi_s(t)} \rangle} ={\langle {\psi_H} \rvert} O_H {\lvert {\psi_H} \rangle} \end{aligned} \hspace{\stretch{1}}(2.18)

New strategy. Interaction picture.

Let’s define

\begin{aligned}U_I(t, t_0) = e^{\frac{i}{\hbar} H_0(t - t_0)} U(t, t_0)\end{aligned} \hspace{\stretch{1}}(2.19)

or

\begin{aligned}U(t, t_0) = e^{-\frac{i}{\hbar} H_0(t - t_0)} U_I(t, t_0).\end{aligned} \hspace{\stretch{1}}(2.20)

Let’s see how this works. We have

\begin{aligned}i \hbar \frac{d{{U_I}}}{dt} &= i \hbar \frac{d{{}}}{dt} \left(e^{\frac{i}{\hbar} H_0(t - t_0)} U(t, t_0)\right) \\ &=-H_0 e^{\frac{i}{\hbar} H_0(t - t_0)} U(t, t_0)+e^{\frac{i}{\hbar} H_0(t - t_0)} \left( i \hbar \frac{d{{}}}{dt} U(t, t_0) \right) \\ &=-H_0 e^{\frac{i}{\hbar} H_0(t - t_0)} U(t, t_0)+e^{\frac{i}{\hbar} H_0(t - t_0)} \left( (H_0 + H'(t)) U(t, t_0) \right) \\ &=e^{\frac{i}{\hbar} H_0(t - t_0)} H'(t) U(t, t_0) \\ &=e^{\frac{i}{\hbar} H_0(t - t_0)} H'(t) e^{-\frac{i}{\hbar} H_0(t - t_0)} U_I(t, t_0).\end{aligned}

Define

\begin{aligned}\bar{H}'(t) =e^{\frac{i}{\hbar} H_0(t - t_0)} H'(t) e^{-\frac{i}{\hbar} H_0(t - t_0)},\end{aligned} \hspace{\stretch{1}}(2.21)

so that our operator equation takes the form

\begin{aligned}i \hbar \frac{d{{}}}{dt} U_I(t, t_0) = \bar{H}'(t) U_I(t, t_0).\end{aligned} \hspace{\stretch{1}}(2.22)

Note that we also have the required identity at the initial time

\begin{aligned}U_I(t_0, t_0) = I.\end{aligned} \hspace{\stretch{1}}(2.23)

Without requiring us to actually find U(t, t_0), all of the dynamics of the time dependent interaction are now embedded in the operator equation for U_I(t, t_0), with the simple evolution due to the time independent portion of the Hamiltonian factored out.
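
To convince ourselves of this factorization numerically (a sketch with a made-up two level H_0 and H'(t), not part of the lecture), we can solve 2.22 for U_I, reattach the e^{-i H_0 (t - t_0)/\hbar} factor, and compare against a direct small step construction of U(t, t_0).

\begin{verbatim}
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

HBAR = 1.0
t0, t1 = 0.0, 4.0
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

H0 = 1.0 * sz                          # made-up static part
def Hp(t):                             # made-up time dependent perturbation
    return 0.3 * np.cos(1.7 * t) * sx

def Hbar(t):                           # interaction picture Hamiltonian of 2.21
    U0 = expm(1j * H0 * (t - t0) / HBAR)
    return U0 @ Hp(t) @ U0.conj().T

def rhs(t, u):                         # i hbar dU_I/dt = Hbar(t) U_I, flattened to a vector
    return ((-1j / HBAR) * (Hbar(t) @ u.reshape(2, 2))).ravel()

sol = solve_ivp(rhs, (t0, t1), np.eye(2, dtype=complex).ravel(), rtol=1e-9, atol=1e-11)
U_I = sol.y[:, -1].reshape(2, 2)
U_factored = expm(-1j * H0 * (t1 - t0) / HBAR) @ U_I     # U = e^{-i H_0 (t - t_0)/hbar} U_I

# Compare against a direct small step construction of U(t1, t0) from the full H(t).
U_direct = np.eye(2, dtype=complex)
steps = np.linspace(t0, t1, 4001)
dt = steps[1] - steps[0]
for k in range(len(steps) - 1):
    tm = 0.5 * (steps[k] + steps[k + 1])
    U_direct = expm(-1j * (H0 + Hp(tm)) * dt / HBAR) @ U_direct

print("max |difference| =", np.max(np.abs(U_factored - U_direct)))   # should be small
\end{verbatim}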

Connection with the Schrödinger picture.

In the Schrödinger picture we have

\begin{aligned}{\lvert {\psi_s(t)} \rangle} &= U(t, t_0) {\lvert {\psi_s(t_0)} \rangle}  \\ &=e^{-\frac{i}{\hbar} H_0(t - t_0)} U_I(t, t_0){\lvert {\psi_s(t_0)} \rangle}.\end{aligned}

With a definition of the interaction picture ket as

\begin{aligned}{\lvert {\psi_I} \rangle} = U_I(t, t_0) {\lvert {\psi_s(t_0)} \rangle} = U_I(t, t_0) {\lvert {\psi_H} \rangle},\end{aligned} \hspace{\stretch{1}}(2.24)

the Schrödinger picture is then related to the interaction picture by

\begin{aligned}{\lvert {\psi_s(t)} \rangle} = e^{-\frac{i}{\hbar} H_0(t - t_0)} {\lvert {\psi_I} \rangle}.\end{aligned} \hspace{\stretch{1}}(2.25)

Also, by multiplying 2.22 by the initial time Schrödinger ket, we remove the last vestiges of U_I and U from the dynamical equation for our time dependent interaction

\begin{aligned}i \hbar \frac{d{{}}}{dt} {\lvert {\psi_I} \rangle} = \bar{H}'(t) {\lvert {\psi_I} \rangle}.\end{aligned} \hspace{\stretch{1}}(2.26)

Interaction picture expectation.

Inverting 2.25, we can form an operator expectation, and relate it to the interaction and Schrödinger pictures

\begin{aligned}{\langle {\psi_s(t)} \rvert} O_s {\lvert {\psi_s(t)} \rangle} ={\langle {\psi_I} \rvert} e^{\frac{i}{\hbar} H_0(t - t_0)}O_se^{-\frac{i}{\hbar} H_0(t - t_0)}{\lvert {\psi_I} \rangle} .\end{aligned} \hspace{\stretch{1}}(2.27)

With a definition

\begin{aligned}O_I =e^{\frac{i}{\hbar} H_0(t - t_0)}O_se^{-\frac{i}{\hbar} H_0(t - t_0)},\end{aligned} \hspace{\stretch{1}}(2.28)

we have

\begin{aligned}{\langle {\psi_s(t)} \rvert} O_s {\lvert {\psi_s(t)} \rangle} ={\langle {\psi_I} \rvert} O_I{\lvert {\psi_I} \rangle}.\end{aligned} \hspace{\stretch{1}}(2.29)

As before, the time evolution of our interaction picture operator, can be found by taking derivatives of 2.28, for which we find

\begin{aligned}i \hbar \frac{d{{O_I(t)}}}{dt} = \left[{O_I(t)},{H_0}\right]\end{aligned} \hspace{\stretch{1}}(2.30)
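
A quick finite difference check of 2.30 (my addition, using a made-up H_0 and Schrödinger operator):

\begin{verbatim}
import numpy as np
from scipy.linalg import expm

HBAR = 1.0
H0 = np.array([[1.0, 0.2], [0.2, -1.0]], dtype=complex)   # made-up static Hamiltonian
Os = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)    # made-up Schrodinger operator
t0 = 0.0

def O_I(t):
    U0 = expm(1j * H0 * (t - t0) / HBAR)
    return U0 @ Os @ U0.conj().T

t, eps = 1.3, 1e-6
lhs = 1j * HBAR * (O_I(t + eps) - O_I(t - eps)) / (2 * eps)   # i hbar dO_I/dt
rhs = O_I(t) @ H0 - H0 @ O_I(t)                               # [O_I, H0]
print(np.max(np.abs(lhs - rhs)))   # only finite difference error remains
\end{verbatim}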

Summarizing the interaction picture.

Given

\begin{aligned}H(t) = H_0 + H'(t),\end{aligned} \hspace{\stretch{1}}(2.31)

and initial time states

\begin{aligned}{\lvert {\psi_I(t_0)} \rangle} ={\lvert {\psi_s(t_0)} \rangle} = {\lvert {\psi_H} \rangle},\end{aligned} \hspace{\stretch{1}}(2.32)

we have

\begin{aligned}{\langle {\psi_s(t)} \rvert} O_s {\lvert {\psi_s(t)} \rangle} ={\langle {\psi_I} \rvert} O_I{\lvert {\psi_I} \rangle},\end{aligned} \hspace{\stretch{1}}(2.33)

where

\begin{aligned}{\lvert {\psi_I} \rangle} = U_I(t, t_0) {\lvert {\psi_s(t_0)} \rangle},\end{aligned} \hspace{\stretch{1}}(2.34)

and

\begin{aligned}i \hbar \frac{d{{}}}{dt} {\lvert {\psi_I} \rangle} = \bar{H}'(t) {\lvert {\psi_I} \rangle},\end{aligned} \hspace{\stretch{1}}(2.35)

or

\begin{aligned}i \hbar \frac{d{{}}}{dt} U_I(t, t_0) &= \bar{H}'(t) U_I(t, t_0) \\ U_I(t_0, t_0) &= I.\end{aligned} \hspace{\stretch{1}}(2.36)

Our interaction picture Hamiltonian is

\begin{aligned}\bar{H}'(t) =e^{\frac{i}{\hbar} H_0(t - t_0)} H'(t) e^{-\frac{i}{\hbar} H_0(t - t_0)},\end{aligned} \hspace{\stretch{1}}(2.38)

and for Schrödinger operators, independent of time, we have the dynamical equation

\begin{aligned}i \hbar \frac{d{{O_I(t)}}}{dt} = \left[{O_I(t)},{H_0}\right]\end{aligned} \hspace{\stretch{1}}(2.39)

Justifying the Taylor expansion above (not class notes).

Multivariable Taylor series.

As outlined in section 2.8 (8.10) of [2], we want to derive the multi-variable Taylor expansion for a scalar valued function of some number of variables

\begin{aligned}f(\mathbf{u}) = f(u^1, u^2, \cdots),\end{aligned} \hspace{\stretch{1}}(3.40)

and consider the displacement operation applied to the vector argument

\begin{aligned}f(\mathbf{a} + \mathbf{x}) = {\left.{{f(\mathbf{a} + t \mathbf{x})}}\right\vert}_{{t=1}}.\end{aligned} \hspace{\stretch{1}}(3.41)

We can Taylor expand a single variable function without any trouble, so introduce

\begin{aligned}g(t) = f(\mathbf{a} + t \mathbf{x}),\end{aligned} \hspace{\stretch{1}}(3.42)

where

\begin{aligned}g(1) = f(\mathbf{a} + \mathbf{x}).\end{aligned} \hspace{\stretch{1}}(3.43)

We have

\begin{aligned}g(t) = g(0) + t {\left.{{ \frac{\partial {g}}{\partial {t}} }}\right\vert}_{{t = 0}}+ \frac{t^2}{2!} {\left.{{ \frac{\partial^2 {g}}{\partial {t^2}} }}\right\vert}_{{t = 0}}+ \cdots,\end{aligned} \hspace{\stretch{1}}(3.44)

so that

\begin{aligned}g(1) = g(0) + {\left.{{ \frac{\partial {g}}{\partial {t}} }}\right\vert}_{{t = 0}}+ \frac{1}{2!} {\left.{{ \frac{\partial^2 {g}}{\partial {t^2}} }}\right\vert}_{{t = 0}}+ \cdots.\end{aligned} \hspace{\stretch{1}}(3.45)

The multivariable Taylor series now becomes a plain old application of the chain rule, where we have to evaluate

\begin{aligned}\frac{dg}{dt} &= \frac{d{{}}}{dt} f(a^1 + t x^1, a^2 + t x^2, \cdots) \\ &= \sum_i \frac{\partial {}}{\partial {(a^i + t x^i)}} f(\mathbf{a} + t \mathbf{x}) \frac{\partial {a^i + t x^i}}{\partial {t}},\end{aligned}

so that

\begin{aligned}{\left.{{\frac{dg}{dt} }}\right\vert}_{{t=0}}= \sum_i x^i \left( {\left.{{ \frac{\partial {f}}{\partial {x^i}}}}\right\vert}_{{x^i = a^i}}\right).\end{aligned} \hspace{\stretch{1}}(3.46)

Assuming an Euclidean space we can write this in the notationally more pleasant fashion using a gradient operator for the space

\begin{aligned}{\left.{{\frac{dg}{dt} }}\right\vert}_{{t=0}} = {\left.{{\mathbf{x} \cdot \boldsymbol{\nabla}_{\mathbf{u}} f(\mathbf{u})}}\right\vert}_{{\mathbf{u} = \mathbf{a}}}.\end{aligned} \hspace{\stretch{1}}(3.47)

To handle the higher order terms, we repeat the chain rule application, yielding for example

\begin{aligned}{\left.{{\frac{d^2 f(\mathbf{a} + t \mathbf{x})}{dt^2} }}\right\vert}_{{t=0}} &={\left.{{\frac{d{{}}}{dt} \sum_i x^i \frac{\partial {f(\mathbf{a} + t \mathbf{x})}}{\partial {(a^i + t x^i)}} }}\right\vert}_{{t=0}}\\ &={\left.{{\sum_i x^i \frac{\partial {}}{\partial {(a^i + t x^i)}} \frac{d{{f(\mathbf{a} + t \mathbf{x})}}}{dt}}}\right\vert}_{{t=0}} \\ &={\left.{{(\mathbf{x} \cdot \boldsymbol{\nabla}_{\mathbf{u}})^2 f(\mathbf{u})}}\right\vert}_{{\mathbf{u} = \mathbf{a}}}.\end{aligned}

Thus the Taylor series associated with a vector displacement takes the tidy form

\begin{aligned}f(\mathbf{a} + \mathbf{x}) = \sum_{k=0}^\infty \frac{1}{{k!}} {\left.{{(\mathbf{x} \cdot \boldsymbol{\nabla}_{\mathbf{u}})^k f(\mathbf{u})}}\right\vert}_{{\mathbf{u} = \mathbf{a}}}.\end{aligned} \hspace{\stretch{1}}(3.48)

Even more fancy, we can form the operator equation

\begin{aligned}f(\mathbf{a} + \mathbf{x}) = {\left.{{e^{ \mathbf{x} \cdot \boldsymbol{\nabla}_{\mathbf{u}} } f(\mathbf{u})}}\right\vert}_{{\mathbf{u} = \mathbf{a}}}\end{aligned} \hspace{\stretch{1}}(3.49)

Here a dummy variable \mathbf{u} has been retained as an instruction not to differentiate the \mathbf{x} part of the directional derivative in any repeated applications of the \mathbf{x} \cdot \boldsymbol{\nabla} operator.

That notational cludge can be removed by swapping \mathbf{a} and \mathbf{x}

\begin{aligned}f(\mathbf{a} + \mathbf{x}) = \sum_{k=0}^\infty \frac{1}{{k!}} (\mathbf{a} \cdot \boldsymbol{\nabla})^k f(\mathbf{x})=e^{ \mathbf{a} \cdot \boldsymbol{\nabla} } f(\mathbf{x}),\end{aligned} \hspace{\stretch{1}}(3.50)

where \boldsymbol{\nabla} = \boldsymbol{\nabla}_{\mathbf{x}} = ({\partial {}}/{\partial {x^1}}, {\partial {}}/{\partial {x^2}}, ...).
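
As a quick numerical check of this expansion (my addition, using an arbitrary smooth test function of two variables), we can compare f(\mathbf{x} + \mathbf{a}) against the first few terms of the series, with the directional derivatives evaluated by finite differences.

\begin{verbatim}
import numpy as np

def f(u):
    # arbitrary smooth test function of two variables
    return np.sin(u[0]) * np.exp(0.3 * u[1]) + u[0] * u[1]**2

x = np.array([0.4, -0.2])     # expansion point
a = np.array([0.05, 0.08])    # small displacement

# g(s) = f(x + s a); then f(x + a) = g(1) = sum_k g^(k)(0)/k!, where
# g^(k)(0) = (a . grad)^k f evaluated at x.
def g(s):
    return f(x + s * a)

h = 1e-2
d1 = (g(h) - g(-h)) / (2 * h)                  # first directional derivative
d2 = (g(h) - 2 * g(0.0) + g(-h)) / h**2        # second directional derivative
approx = g(0.0) + d1 + d2 / 2.0                # terms through (a . grad)^2
print(f"f(x + a) = {g(1.0):.8f}, series through k=2 = {approx:.8f}")
\end{verbatim}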

Having derived this (or for those with lesser degrees of amnesia, recall it), we can see that 2.2 was a direct application of this, retaining no second order or higher terms.

Our expression used in the interaction Hamiltonian discussion was

\begin{aligned}\frac{1}{{{\left\lvert{\mathbf{r} - \mathbf{R}}\right\rvert}}} \approx \frac{1}{{{\left\lvert{\mathbf{r}}\right\rvert}}} + \mathbf{R} \cdot {\left.{{\left(\frac{\partial {}}{\partial {\mathbf{R}}} \frac{1}{{{\left\lvert{ \mathbf{r} - \mathbf{R}}\right\rvert}}}\right)}}\right\vert}_{{\mathbf{R} = 0}}.\end{aligned} \hspace{\stretch{1}}(3.51)

We can see this has the same structure as above with some variable substitutions. Evaluating the derivative we have

\begin{aligned}\frac{\partial {}}{\partial {\mathbf{R}}} \frac{1}{{{\left\lvert{ \mathbf{r} - \mathbf{R}}\right\rvert}}}&=\mathbf{e}_i \frac{\partial {}}{\partial {R^i}} ((x^j - R^j)^2)^{-1/2} \\ &=\mathbf{e}_i \left(-\frac{1}{{2}}\right) 2 (x^j - R^j) \frac{\partial {(x^j - R^j)}}{\partial {R^i}} \frac{1}{{{\left\lvert{\mathbf{r} - \mathbf{R}}\right\rvert}^3}} \\ &= \frac{\mathbf{r} - \mathbf{R}}{{\left\lvert{\mathbf{r} - \mathbf{R}}\right\rvert}^3} ,\end{aligned}

and at \mathbf{R} = 0 we have

\begin{aligned}\frac{1}{{{\left\lvert{\mathbf{r} - \mathbf{R}}\right\rvert}}} \approx \frac{1}{{{\left\lvert{\mathbf{r}}\right\rvert}}} + \mathbf{R} \cdot \frac{\mathbf{r}}{{\left\lvert{\mathbf{r}}\right\rvert}^3}.\end{aligned} \hspace{\stretch{1}}(3.52)

We see that this directional derivative produces the classical Coulomb electric field expression once we take the \mathbf{r}/{\left\lvert{\mathbf{r}}\right\rvert}^3 term and multiply it with the - Z e factor.
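
The first order result 3.52 is easy to sanity check numerically (my addition, with an arbitrary \mathbf{r} and a small displacement \mathbf{R}):

\begin{verbatim}
import numpy as np

r = np.array([1.0, 2.0, -0.5])
R = np.array([0.02, -0.01, 0.03])     # |R| << |r|

exact = 1.0 / np.linalg.norm(r - R)
first_order = 1.0 / np.linalg.norm(r) + np.dot(R, r) / np.linalg.norm(r)**3

print(exact, first_order)   # agree up to second order in |R|/|r|
\end{verbatim}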

With algebra.

A different way to justify the expansion of 2.2 is to consider a Clifford algebra factorization (following notation from [3]) of the absolute vector difference, where \mathbf{R} is considered small.

\begin{aligned}{\left\lvert{\mathbf{r} - \mathbf{R}}\right\rvert}&= \sqrt{ \left(\mathbf{r} - \mathbf{R}\right) \left(\mathbf{r} - \mathbf{R}\right) } \\ &= \sqrt{ \left\langle{{\mathbf{r} \left(1 - \frac{1}{\mathbf{r}} \mathbf{R}\right) \left(1 - \mathbf{R} \frac{1}{\mathbf{r}}\right) \mathbf{r}}}\right\rangle } \\ &= \sqrt{ \left\langle{{\mathbf{r}^2 \left(1 - \frac{1}{\mathbf{r}} \mathbf{R}\right) \left(1 - \mathbf{R} \frac{1}{\mathbf{r}}\right) }}\right\rangle } \\ &= {\left\lvert{\mathbf{r}}\right\rvert} \sqrt{ 1 - 2 \frac{1}{\mathbf{r}} \cdot \mathbf{R} + \left\langle{{\frac{1}{\mathbf{r}} \mathbf{R} \mathbf{R} \frac{1}{\mathbf{r}}}}\right\rangle} \\ &= {\left\lvert{\mathbf{r}}\right\rvert} \sqrt{ 1 - 2 \frac{1}{\mathbf{r}} \cdot \mathbf{R} + \frac{\mathbf{R}^2}{\mathbf{r}^2}}\end{aligned}

Neglecting the \mathbf{R}^2 term, we can then Taylor series expand this scalar expression

\begin{aligned}\frac{1}{{{\left\lvert{\mathbf{r} - \mathbf{R}}\right\rvert}}} \approx\frac{1}{{{\left\lvert{\mathbf{r}}\right\rvert}}} \left( 1 + \frac{1}{\mathbf{r}} \cdot \mathbf{R}\right) =\frac{1}{{{\left\lvert{\mathbf{r}}\right\rvert}}} + \frac{\hat{\mathbf{r}}}{\mathbf{r}^2} \cdot \mathbf{R}=\frac{1}{{{\left\lvert{\mathbf{r}}\right\rvert}}} + \frac{\mathbf{r}}{{\left\lvert{\mathbf{r}}\right\rvert}^3} \cdot \mathbf{R}.\end{aligned} \hspace{\stretch{1}}(3.53)

Observe this is what was found with the multivariable Taylor series expansion too.

References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.

[2] D. Hestenes. New Foundations for Classical Mechanics. Kluwer Academic Publishers, 1999.

[3] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.
