Peeter Joot's (OLD) Blog.

Math, physics, perl, and programming obscurity.

Archive for December, 2011

Pondering a question on conditional probability.

Posted by peeterjoot on December 27, 2011

[Click here for a PDF of this post with nicer formatting and figures if the post had any (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

A bit of confusion.

In section 1.5 of [1], while discussing statistical uncertainty, there is a mention of conditional probability. Once told that a die only rolls numbers up to four, we have a conditional probability for the die of

\begin{aligned}P(i | i \le 4) = \frac{P[i U(i \le 4)]}{P(i \le 4)}= \frac{\frac{1}{{6}}}{\frac{4}{6}} = \frac{1}{{4}}.\end{aligned} \hspace{\stretch{1}}(1.1)

I was having trouble understanding the numerator in this expression. An initial confusion was, "what is this U?" I came to the conclusion that it is just a typo, meant to be set intersection, as in

\begin{aligned}P(i | i \le 4) = \frac{P[i \cap (i \le 4)]}{P(i \le 4)}= \frac{\frac{1}{{6}}}{\frac{4}{6}} = \frac{1}{{4}}.\end{aligned} \hspace{\stretch{1}}(1.2)
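To convince myself, here's a quick brute force check (a sketch of my own, not from the text): enumerate the six equally likely faces and count.

from fractions import Fraction

faces = list(range(1, 7))                                                      # six equally likely outcomes
numerator = Fraction(sum(1 for i in faces if i == 2 and i <= 4), len(faces))   # P(i = 2 and i <= 4) = 1/6
denominator = Fraction(sum(1 for i in faces if i <= 4), len(faces))            # P(i <= 4) = 4/6
print(numerator / denominator)                                                 # 1/4, matching 1.2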

The denominator makes sense. I picture a sample space with six points, as in the figure below.

Sample space for a die.

The points 1, 2, 3, 4 represent the i \le 4 subspace.

So we have a 4/6 probability for that compound event.

I found it easy to get mixed up considering the numerator. I was envisioning a set of six points intersecting with the set of points 1, 2, 3, 4, which is just that set of four.

To clear up the confusion, imagine instead that we are asking about the probability of finding the i = 2 face in the die roll, given the fact that the die only rolls i = 1, 2, 3, 4. The intersection sample space for that event is just the single point i = 2, so we have a 1/6 probability for that compound event. The final result makes sense, since if the die will only roll one of the values i = 1, 2, 3, 4, each of those outcomes should have probability 1/4, as found through the conditional probability formula.

References

[1] E.A. Jackson. Equilibrium statistical mechanics. Dover Publications, 2000.


On instant messaging in the work place.

Posted by peeterjoot on December 16, 2011

When I started at IBM way too many years ago, there were three main forms of development communication:

  • email
  • phone
  • office drop by

Now there's one more in the list, the 'instant message'. We have a homegrown system called Sametime, but it doesn't differ significantly from any of the other well known chat systems like Windows Live Messenger or even Facebook.

I personally find this sort of chat system very intrusive, and have converged on a protocol that is acceptable to me for dealing with them.

  • I don't expect anybody else to answer in real time, and do not attempt to respond myself to any such message in real time. It is rare that I'll wait around hovering over my keyboard while I see "XXX is typing" just so that I can reply immediately, and I don't expect this of anybody else either. When or if I happen to notice the message, and if the time is convenient, I'll look at it.
  • If I don’t answer, I expect that the sender will follow up in email if required.
  • If the message request says something without any technical content, like “hi”, “hello”, and so forth, I generally close it without response.
  • After sending any message to anybody else, I close my chat window.  If they choose to answer it at some point, then it reappears with their response.

With many people working remotely these days, the instant messaging system does often work better than a phone call, and it is great for sharing text with less delay than email. However, I think it is generally unrealistic to expect any sort of interactive response in the workplace when using such a chat system, because you have no idea what else the person at the other end is doing.

 


Emails with code review requests.

Posted by peeterjoot on December 13, 2011

We have email distribution lists for various code components in our product and requests for review end up being sent to these lists.

Twice this week I’ve sent emails in response to code reviews along the lines of:

“Please re-send your request with enough information to give the potential reviewer a clue what they should be looking at.”

It does not appear obvious to some people to do this. Each of our distribution lists is serviced by a number of possible people, with one person selected as the "victim of the week" to own answering the email and/or doing whatever defect work, if any, ends up generated by that email.

For review requests, it may be that somebody who knows the code better than the person monitoring the list for the week can review it, but if you don't describe the change in some detail then that possible "better reviewer" would never know to look.

Leaving half the work to the reader of the email can easily be justified in a number of circumstances. However, if your email is distributed, then an attempt to save a bit of time in drafting it means that all the readers incur the cost. The cost of servicing email can really add up, so make your emails count.

A review request should include the following:

  • Defect # and abstract if appropriate.
  • High level description of what’s being changed, and why.
  • Filenames and function names of what is being changed. If this is a lot then details can be provided outside of the email.
  • Context diffs (for example, those generated by GNU 'diff -p -U10'). Many people have preferences on how they compare files, so also provide the new and original versions of the files. Do not attach these to the email, but include a path to where they are available.
  • Instead of providing copies of the files, it may make sense to provide information about the version control sandbox that contains the files. Do this in an effective way ((*) see below).

(*) Perhaps unique to clearcase, there can be bad ways to share information about your changes. It is common for people to share the clearcase branch name that they've used. Also provide the view name (if that view is accessible to your reviewer), or the config spec (if your view is not accessible to the reviewer). What your changed files are branched from may not be obvious, and the files may not even be visible to the reviewer in their view if a specific version of the directory element is required for the file to be visible (i.e. when you have created a new file).


Second form of adiabatic approximation.

Posted by peeterjoot on December 11, 2011

[Click here for a PDF of this post with nicer formatting and figures if the post had any (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

Motivation.

In class we were shown an adiabatic approximation where we started with (or worked our way towards) a representation of the form

\begin{aligned}{\left\lvert {\psi} \right\rangle} = \sum_k c_k(t) e^{-i \int_0^t (\omega_k(t') - \Gamma_k(t')) dt' } {\left\lvert {\psi_k(t)} \right\rangle}\end{aligned} \hspace{\stretch{1}}(1.1)

where {\left\lvert {\psi_k(t)} \right\rangle} were normalized energy eigenkets for the (slowly) evolving Hamiltonian

\begin{aligned}H(t) {\left\lvert {\psi_k(t)} \right\rangle} = E_k(t) {\left\lvert {\psi_k(t)} \right\rangle}\end{aligned} \hspace{\stretch{1}}(1.2)

In the problem sets we were shown a different adiabatic approximation, where our starting point is

\begin{aligned}{\left\lvert {\psi(t)} \right\rangle} = \sum_k c_k(t) {\left\lvert {\psi_k(t)} \right\rangle}.\end{aligned} \hspace{\stretch{1}}(1.3)

For completeness, here’s a walk through of the general amplitude derivation that’s been used.

Guts

We operate with our energy identity once again

\begin{aligned}0 &=\left(H - i \hbar \frac{d{{}}}{dt} \right) \sum_k c_k {\left\lvert {k} \right\rangle} \\ &=\sum_k c_k E_k {\left\lvert {k} \right\rangle} - i \hbar c_k' {\left\lvert {k} \right\rangle} - i \hbar c_k {\left\lvert {k'} \right\rangle} ,\end{aligned}

where

\begin{aligned}{\left\lvert {k'} \right\rangle} = \frac{d{{}}}{dt} {\left\lvert {k} \right\rangle}.\end{aligned} \hspace{\stretch{1}}(2.4)

Bra'ing with {\left\langle {m} \right\rvert}, and splitting the sum into k = m and k \ne m parts,

\begin{aligned}0 =c_m E_m - i \hbar c_m' - i \hbar c_m \left\langle{{m}} \vert {{m'}}\right\rangle - i \hbar \sum_{k \ne m} c_k \left\langle{{m}} \vert {{k'}}\right\rangle \end{aligned} \hspace{\stretch{1}}(2.5)

Again writing

\begin{aligned}\Gamma_m = i \left\langle{{m}} \vert {{m'}}\right\rangle \end{aligned} \hspace{\stretch{1}}(2.6)

We have

\begin{aligned}c_m' = \frac{1}{{i \hbar}} c_m (E_m - \hbar \Gamma_m) - \sum_{k \ne m} c_k \left\langle{{m}} \vert {{k'}}\right\rangle,\end{aligned} \hspace{\stretch{1}}(2.7)

In this form we can make an “Adiabatic” approximation, dropping the k \ne m terms, and integrate

\begin{aligned}\int \frac{d c_m}{c_m} = \frac{1}{{i \hbar}} \int_0^t (E_m(t') - \hbar \Gamma_m(t')) dt' \end{aligned} \hspace{\stretch{1}}(2.8)

or

\begin{aligned}c_m(t) = A \exp\left(\frac{1}{{i \hbar}} \int_0^t (E_m(t') - \hbar \Gamma_m(t')) dt' \right).\end{aligned} \hspace{\stretch{1}}(2.9)

Evaluating at t = 0 fixes the integration constant, giving

\begin{aligned}c_m(t) = c_m(0) \exp\left(\frac{1}{{i \hbar}} \int_0^t (E_m(t') - \hbar \Gamma_m(t')) dt' \right).\end{aligned} \hspace{\stretch{1}}(2.10)
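As a sanity check (a sketch of my own using sympy, with generic functions standing in for E_m(t) and \Gamma_m(t)), we can confirm that 2.10 satisfies 2.7 once the k \ne m terms are dropped:

import sympy as sp

t, tp, hbar = sp.symbols('t tp hbar', positive=True)
E = sp.Function('E')            # stands in for E_m(t)
Gamma = sp.Function('Gamma')    # stands in for Gamma_m(t)
c0 = sp.Symbol('c0')            # c_m(0)

c = c0 * sp.exp(sp.Integral((E(tp) - hbar*Gamma(tp)) / (sp.I*hbar), (tp, 0, t)))

# residual of c_m' - (1/(i hbar)) c_m (E_m - hbar Gamma_m); expect 0
residual = sp.diff(c, t) - c * (E(t) - hbar*Gamma(t)) / (sp.I*hbar)
print(sp.simplify(residual))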

Observe that this is very close to the starting point of the adiabatic approximation we performed in class since we end up with

\begin{aligned}{\left\lvert {\psi} \right\rangle} = \sum_k c_k(0) e^{-i \int_0^t (\omega_k(t') - \Gamma_k(t')) dt' } {\left\lvert {k(t)} \right\rangle},\end{aligned} \hspace{\stretch{1}}(2.11)

So, to recover the more detailed treatment that started with 1.1, where we ended up with all the cross terms having both \omega_k and Berry phase \Gamma_k dependence, we have only to generalize by replacing c_k(0) with c_k(t).


Evaluating the squared sinc integral.

Posted by peeterjoot on December 10, 2011

[Click here for a PDF of this post with nicer formatting]

Motivation

In the Fermi’s golden rule lecture we used the result for the integral of the squared \text{sinc} function. Here is a reminder of the contours required to perform this integral.

Guts

We want to evaluate

\begin{aligned}\int_{-\infty}^\infty \frac{\sin^2 (x\left\lvert {\mu} \right\rvert)}{x^2} dx\end{aligned} \hspace{\stretch{1}}(1.2.1)

We make a few changes of variables

\begin{aligned}\begin{aligned}\int_{-\infty}^\infty \frac{\sin^2 (x\left\lvert {\mu} \right\rvert)}{x^2} dx &= \left\lvert {\mu} \right\rvert \int_{-\infty}^\infty \frac{\sin^2 (y)}{y^2} dy \\ &= -i \left\lvert {\mu} \right\rvert \int_{-\infty}^\infty \frac{(e^{iy} - e^{-iy})^2}{(2 i y)^2} i dy \\ &=-\frac{i \left\lvert {\mu} \right\rvert}{4} \int_{-i\infty}^{i\infty} \frac{e^{2z} + e^{-2z} - 2}{z^2} dz\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.2.2)

Now we pick a contour that is distorted to one side of the origin as in fig. 1.1

Fig 1.1: Contour distorted to one side of the double pole at the origin

We employ Jordan's theorem (section 8.12 [1]) now to pick the contours for each of the integrals, since we need to ensure the e^{\pm 2z} terms converge as R \rightarrow \infty for the z = R e^{i\theta} part of the contour. We can write

\begin{aligned}\int_{-\infty}^\infty \frac{\sin^2 (x\left\lvert {\mu} \right\rvert)}{x^2} dx=-\frac{i \left\lvert {\mu} \right\rvert}{4} \left(\int_{C_0 + C_2} \frac{e^{2z}}{z^2} dz-\int_{C_0 + C_1} \frac{e^{-2z}}{z^2} dz-\int_{C_0 + C_1} \frac{2}{z^2} dz\right)\end{aligned} \hspace{\stretch{1}}(1.2.3)

The last two integrals surround no poles, so we have only the first to deal with

\begin{aligned}\begin{aligned}\int_{C_0 + C_2} \frac{e^{2z}}{z^2} dz &= 2 \pi i \frac{1}{{1!}} {\left.{{ \frac{d}{dz} e^{2z}}}\right\vert}_{{z=0}} \\ &= 4 \pi i \end{aligned}\end{aligned} \hspace{\stretch{1}}(1.2.4)

Putting everything back together we have

\begin{aligned}\int_{-\infty}^\infty \frac{\sin^2 (x\left\lvert {\mu} \right\rvert)}{x^2} dx &= -\frac{i \left\lvert {\mu} \right\rvert}{4} 4 \pi i \\ &= \pi \left\lvert {\mu} \right\rvert\end{aligned} \hspace{\stretch{1}}(1.2.5)
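A quick numerical sanity check of this result (a sketch of my own, with an arbitrary choice of \mu):

import numpy as np
from scipy.integrate import quad

mu = 2.5

def integrand(x):
    # sin^2(mu x)/x^2, with the x -> 0 limit mu^2 patched in for safety
    return mu**2 if x == 0.0 else np.sin(mu * x)**2 / x**2

# the integrand is even, so integrate over [0, inf) and double; quad may warn about the
# slowly converging oscillatory tail, but the value should still land near pi |mu|
val, err = quad(integrand, 0.0, np.inf, limit=1000)
print(2 * val, np.pi * mu)   # both ~ 7.85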

On the cavalier choice of contours

The choice of which contours to pick above may seem pretty arbitrary, but they were picked for good reason. Suppose you picked C_0 + C_1 for the first integral. On the big C_1 arc, with a z = R e^{i \theta} substitution, we have

\begin{aligned}\left\lvert {\int_{C_1} \frac{e^{2 z}}{z^2} dz} \right\rvert &= \left\lvert {\int_{\theta = \pi/2}^{-\pi/2} \frac{e^{ 2 R (\cos\theta + i \sin\theta) }}{R^2 e^{ 2 i \theta}}R i e^{i \theta} d\theta} \right\rvert \\ &= \frac{1}{R}\left\lvert {\int_{\theta = \pi/2}^{-\pi/2} e^{ 2 R (\cos\theta + i \sin\theta) }e^{-i \theta} d\theta} \right\rvert \\ &\le \frac{1}{R}\int_{\theta = -\pi/2}^{\pi/2} \left\lvert {e^{ 2 R \cos\theta }} \right\rvert d\theta \\ &\le \frac{\pi e^{2 R}}{R}\end{aligned} \hspace{\stretch{1}}(1.2.6)

This clearly doesn't have the zero convergence property that we desire. We need to pick the C_2 contour for the first (positive exponent) integral since in that [\pi/2, 3\pi/2] range, \cos\theta is always negative. We can, however, use the C_1 contour for the second (negative exponent) integral. Explicitly, again by example, using the C_2 contour for the first integral, over that portion of the arc we have

\begin{aligned}\left\lvert {\int_{C_2} \frac{e^{2 z}}{z^2} dz} \right\rvert &= \left\lvert {\int_{\theta = \pi/2}^{3 \pi/2} \frac{e^{ 2 R (\cos\theta + i \sin\theta) }}{R^2 e^{ 2 i \theta}}R i e^{i \theta} d\theta} \right\rvert \\ &= \frac{1}{R}\left\lvert {\int_{\theta = \pi/2}^{3 \pi/2} e^{ 2 R (\cos\theta + i \sin\theta) }e^{-i \theta} d\theta} \right\rvert \\ &\le \frac{1}{R}\int_{\theta = \pi/2}^{3 \pi/2} \left\lvert {e^{ 2 R \cos\theta }} \right\rvert d\theta \\ &\le \frac{\pi}{R},\end{aligned} \hspace{\stretch{1}}(1.2.7)

since \cos\theta \le 0 over the whole [\pi/2, 3\pi/2] range, so the integrand magnitude is bounded by one. This does go to zero as R \rightarrow \infty, as desired.

References

[1] W.R. LePage. Complex Variables and the Laplace Transform for Engineers. Courier Dover Publications, 1980.


Verifying the Helmholtz Green’s function.

Posted by peeterjoot on December 9, 2011

[Click here for a PDF of this post with nicer formatting and figures if the post had any (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

Motivation.

In class this week we looked at an instance of the Helmholtz equation

\begin{aligned}\left( \boldsymbol{\nabla}^2 + \mathbf{k}^2\right) \psi_\mathbf{k}(\mathbf{r}) = s(\mathbf{r}).\end{aligned} \hspace{\stretch{1}}(1.1)

We were told that the Green’s function

\begin{aligned}\left( \boldsymbol{\nabla}^2 + \mathbf{k}^2\right) G^0(\mathbf{r}, \mathbf{r}') = \delta(\mathbf{r}- \mathbf{r}')\end{aligned} \hspace{\stretch{1}}(1.2)

that can be used to solve for a particular solution of this differential equation via convolution

\begin{aligned}\psi_\mathbf{k}(\mathbf{r}) = \int G^0(\mathbf{r}, \mathbf{r}') s(\mathbf{r}') d^3 \mathbf{r}',\end{aligned} \hspace{\stretch{1}}(1.3)

had the value

\begin{aligned}G^0(\mathbf{r}, \mathbf{r}') = - \frac{1}{{4 \pi}} \frac{e^{i k {\left\lvert{\mathbf{r} - \mathbf{r}'}\right\rvert}} }{{\left\lvert{\mathbf{r} - \mathbf{r}'}\right\rvert}}.\end{aligned} \hspace{\stretch{1}}(1.4)

Let’s try to verify this.

Guts

Application of the Helmholtz differential operator \boldsymbol{\nabla}^2 + \mathbf{k}^2 on the presumed solution gives

\begin{aligned}(\boldsymbol{\nabla}^2 + \mathbf{k}^2) \psi_\mathbf{k}(\mathbf{r}) = - \frac{1}{{4 \pi}} \int (\boldsymbol{\nabla}^2 + \mathbf{k}^2) \frac{e^{i k {\left\lvert{\mathbf{r} - \mathbf{r}'}\right\rvert}} }{{\left\lvert{\mathbf{r} - \mathbf{r}'}\right\rvert}}s(\mathbf{r}') d^3 \mathbf{r}'.\end{aligned} \hspace{\stretch{1}}(2.5)

When \mathbf{r} \ne \mathbf{r}'.

To proceed we’ll need to evaluate

\begin{aligned}\boldsymbol{\nabla}^2 \frac{e^{i k {\left\lvert{\mathbf{r} - \mathbf{r}'}\right\rvert}} }{{\left\lvert{\mathbf{r} - \mathbf{r}'}\right\rvert}}.\end{aligned} \hspace{\stretch{1}}(2.6)

Writing \mu = {\left\lvert{\mathbf{r} - \mathbf{r}'}\right\rvert} we start with the computation of

\begin{aligned}\frac{\partial {}}{\partial {x}} \frac{e^{i k \mu} }{\mu}&=\frac{\partial {\mu}}{\partial {x}} \left( \frac{i k}{\mu} - \frac{1}{{\mu^2}} \right) e^{i k \mu} \\ &=\frac{\partial {\mu}}{\partial {x}} \left( i k - \frac{1}{{\mu}} \right) \frac{e^{i k \mu}}{\mu}\end{aligned}

We see that we’ll have

\begin{aligned}\boldsymbol{\nabla} \frac{e^{i k \mu} }{\mu} = \left( i k - \frac{1}{{\mu}} \right) \frac{e^{i k \mu}}{\mu} \boldsymbol{\nabla} \mu.\end{aligned} \hspace{\stretch{1}}(2.7)

Taking second derivatives with respect to x we find

\begin{aligned}\frac{\partial^2 {{}}}{\partial {{x}}^2} \frac{e^{i k \mu} }{\mu}&=\frac{\partial^2 {{\mu}}}{\partial {{x}}^2} \left( i k - \frac{1}{{\mu}} \right) \frac{e^{i k \mu}}{\mu}+\frac{\partial {\mu}}{\partial {x}} \frac{\partial {\mu}}{\partial {x}} \frac{1}{{\mu^2}} \frac{e^{i k \mu}}{\mu}+\left( \frac{\partial {\mu}}{\partial {x}} \right)^2 \left( i k - \frac{1}{{\mu}} \right)^2 \frac{e^{i k \mu}}{\mu} \\ &=\frac{\partial^2 {{\mu}}}{\partial {{x}}^2} \left( i k - \frac{1}{{\mu}} \right) \frac{e^{i k \mu}}{\mu}+\left( \frac{\partial {\mu}}{\partial {x}} \right)^2 \left( -k^2 - \frac{ 2 i k }{\mu} + \frac{2}{\mu^2} \right)\frac{e^{i k \mu}}{\mu}.\end{aligned}

Our Laplacian is then

\begin{aligned}\boldsymbol{\nabla}^2\frac{e^{i k \mu} }{\mu} =\left( i k - \frac{1}{{\mu}} \right) \frac{e^{i k \mu}}{\mu} \boldsymbol{\nabla}^2 \mu+\left( -k^2 - \frac{ 2 i k }{\mu} + \frac{2}{\mu^2} \right)\frac{e^{i k \mu}}{\mu} (\boldsymbol{\nabla} \mu)^2.\end{aligned} \hspace{\stretch{1}}(2.8)

Now let's calculate the derivatives of \mu. Working on x again, we have

\begin{aligned}\frac{\partial {}}{\partial {x}} \mu&=\frac{\partial {}}{\partial {x}} \sqrt{ (x - x')^2 +(y - y')^2 +(z - z')^2 } \\ &=\frac{1}{{2}} 2 (x - x')\frac{1}{{\sqrt{ (x - x')^2 +(y - y')^2 +(z - z')^2 }}} \\ &=\frac{x - x'}{\mu}.\end{aligned}

So we have

\begin{aligned}\boldsymbol{\nabla} \mu &= \frac{\mathbf{r} - \mathbf{r}'}{\mu} \\ (\boldsymbol{\nabla} \mu)^2 &= 1 \end{aligned} \hspace{\stretch{1}}(2.9)

Taking second derivatives with respect to x we find

\begin{aligned}\frac{\partial^2 {{}}}{\partial {{x}}^2} \mu&= \frac{\partial {}}{\partial {x}}\frac{x - x'}{\mu} \\ &= \frac{1}{\mu} - (x - x') \frac{\partial {\mu}}{\partial {x}} \frac{1}{{\mu^2}}\\ &=\frac{1}{\mu} - (x - x') \frac{x - x'}{\mu} \frac{1}{{\mu^2}}\\ &=\frac{1}{\mu} - (x - x')^2 \frac{1}{{\mu^3}}.\end{aligned}

So we find

\begin{aligned}\boldsymbol{\nabla}^2 \mu = \frac{3}{\mu} - \frac{1}{{\mu}},\end{aligned} \hspace{\stretch{1}}(2.11)

or

\begin{aligned}\boldsymbol{\nabla}^2 \mu = \frac{2}{\mu}.\end{aligned} \hspace{\stretch{1}}(2.12)

Inserting this and (\boldsymbol{\nabla} \mu)^2 into 2.8 we find

\begin{aligned}\begin{aligned}\boldsymbol{\nabla}^2\frac{e^{i k \mu} }{\mu} &=\left( i k - \frac{1}{{\mu}} \right) \frac{e^{i k \mu}}{\mu} \frac{2}{\mu}+\left( -k^2 - \frac{ 2 i k }{\mu} + \frac{2}{\mu^2} \right)\frac{e^{i k \mu}}{\mu} \\ &=-k^2 \frac{e^{i k \mu}}{\mu}.\end{aligned}\end{aligned} \hspace{\stretch{1}}(2.13)

This shows us that provided \mathbf{r} \ne \mathbf{r}' we have

\begin{aligned}(\boldsymbol{\nabla}^2 + \mathbf{k}^2) G^0(\mathbf{r}, \mathbf{r}') = 0.\end{aligned} \hspace{\stretch{1}}(2.14)
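A small symbolic check of this (a sketch of my own, taking \mathbf{r}' = 0 so that \mu = {\left\lvert{\mathbf{r}}\right\rvert}); sympy's simplify should reduce the expression below to zero away from the origin:

import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
k = sp.Symbol('k', positive=True)
mu = sp.sqrt(x**2 + y**2 + z**2)
G = sp.exp(sp.I * k * mu) / mu

laplacian = sum(sp.diff(G, v, 2) for v in (x, y, z))
print(sp.simplify(laplacian + k**2 * G))   # expect 0 (valid for mu != 0)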

In the neighborhood of {\left\lvert{\mathbf{r} - \mathbf{r}'}\right\rvert} < \epsilon.

Having shown that we end up with zero everywhere that \mathbf{r} \ne \mathbf{r}', we are left to consider a neighborhood of the volume surrounding the point \mathbf{r} in our integral. Following the Coulomb treatment in section 2.2 of [1] we use a spherical volume element centered around \mathbf{r} of radius \epsilon, and then convert a divergence to a surface integral to evaluate the integral around the problematic point

\begin{aligned}-\frac{1}{{4\pi}} \int_{\text{all space}} (\boldsymbol{\nabla}^2 + \mathbf{k}^2) \frac{e^{i k {\left\lvert{\mathbf{r} - \mathbf{r}'}\right\rvert}}}{{\left\lvert{\mathbf{r} - \mathbf{r}'}\right\rvert}} s(\mathbf{r}') d^3 \mathbf{r}'=-\frac{1}{{4\pi}} \int_{{\left\lvert{\mathbf{r} - \mathbf{r}'}\right\rvert} < \epsilon} (\boldsymbol{\nabla}^2 + \mathbf{k}^2) \frac{e^{i k {\left\lvert{\mathbf{r} - \mathbf{r}'}\right\rvert}}}{{\left\lvert{\mathbf{r} - \mathbf{r}'}\right\rvert}} s(\mathbf{r}') d^3 \mathbf{r}'\end{aligned} \hspace{\stretch{1}}(2.15)

We make the change of variables \mathbf{r}' = \mathbf{r} + \mathbf{a}. We add an explicit \mathbf{r} suffix to our Laplacian at the same time to remind us that it is taking derivatives with respect to the coordinates of \mathbf{r} = (x, y, z), and not the coordinates of our integration variable \mathbf{a} = (a_x, a_y, a_z). Assuming sufficient continuity and “well behavedness” of s(\mathbf{r}') we’ll be able to pull it out of the integral, giving

\begin{aligned}-\frac{1}{{4\pi}} \int_{{\left\lvert{\mathbf{r} - \mathbf{r}'}\right\rvert} < \epsilon} (\boldsymbol{\nabla}_\mathbf{r}^2 + \mathbf{k}^2) \frac{e^{i k {\left\lvert{\mathbf{r} - \mathbf{r}'}\right\rvert}}}{{\left\lvert{\mathbf{r} - \mathbf{r}'}\right\rvert}} s(\mathbf{r}') d^3 \mathbf{r}'&= -\frac{1}{4\pi} \int_{{\left\lvert{\mathbf{a}}\right\rvert} < \epsilon} (\boldsymbol{\nabla}_\mathbf{r}^2 + \mathbf{k}^2) \frac{e^{i k {\left\lvert{\mathbf{a}}\right\rvert}}}{{\left\lvert{\mathbf{a}}\right\rvert}} s(\mathbf{r} + \mathbf{a}) d^3 \mathbf{a} \\ &= -\frac{s(\mathbf{r})}{4\pi} \int_{{\left\lvert{\mathbf{a}}\right\rvert} < \epsilon} (\boldsymbol{\nabla}_\mathbf{r}^2 + \mathbf{k}^2) \frac{e^{i k {\left\lvert{\mathbf{a}}\right\rvert}}}{{\left\lvert{\mathbf{a}}\right\rvert}} d^3 \mathbf{a} \end{aligned}

Recalling the dependencies on the derivatives of {\left\lvert{\mathbf{r} - \mathbf{r}'}\right\rvert} in our previous gradient evaluations, we note that we have

\begin{aligned}\boldsymbol{\nabla}_\mathbf{r} {\left\lvert{\mathbf{r} - \mathbf{r}'}\right\rvert} &= -\boldsymbol{\nabla}_\mathbf{a} {\left\lvert{\mathbf{a}}\right\rvert} \\ (\boldsymbol{\nabla}_\mathbf{r} {\left\lvert{\mathbf{r} - \mathbf{r}'}\right\rvert})^2 &= (\boldsymbol{\nabla}_\mathbf{a} {\left\lvert{\mathbf{a}}\right\rvert})^2 \\ \boldsymbol{\nabla}_\mathbf{r}^2 {\left\lvert{\mathbf{r} - \mathbf{r}'}\right\rvert} &= \boldsymbol{\nabla}_\mathbf{a}^2 {\left\lvert{\mathbf{a}}\right\rvert},\end{aligned} \hspace{\stretch{1}}(2.16)

so with \mathbf{a} = \mathbf{r}' - \mathbf{r}, we can rewrite our Laplacian as

\begin{aligned}\boldsymbol{\nabla}_\mathbf{r}^2 \frac{e^{i k {\left\lvert{\mathbf{r} - \mathbf{r}'}\right\rvert}}}{{\left\lvert{\mathbf{r} - \mathbf{r}'}\right\rvert}} = \boldsymbol{\nabla}_\mathbf{a}^2 \frac{e^{i k {\left\lvert{\mathbf{a}}\right\rvert}}}{{\left\lvert{\mathbf{a}}\right\rvert}} = \boldsymbol{\nabla}_\mathbf{a} \cdot \left(\boldsymbol{\nabla}_\mathbf{a} \frac{e^{i k {\left\lvert{\mathbf{a}}\right\rvert}}}{{\left\lvert{\mathbf{a}}\right\rvert}} \right)\end{aligned} \hspace{\stretch{1}}(2.19)

This gives us

\begin{aligned}-\frac{s(\mathbf{r})}{4\pi} \int_{{\left\lvert{\mathbf{a}}\right\rvert} < \epsilon} (\boldsymbol{\nabla}_\mathbf{a}^2 + \mathbf{k}^2) \frac{e^{i k {\left\lvert{\mathbf{a}}\right\rvert}}}{{\left\lvert{\mathbf{a}}\right\rvert}} d^3 \mathbf{a} &=-\frac{s(\mathbf{r})}{4\pi} \int_{dV} \boldsymbol{\nabla}_\mathbf{a} \cdot \left( \boldsymbol{\nabla}_\mathbf{a} \frac{e^{i k {\left\lvert{\mathbf{a}}\right\rvert}}}{{\left\lvert{\mathbf{a}}\right\rvert}} \right) d^3 \mathbf{a} -\frac{s(\mathbf{r})}{4\pi} \int_{dV}\mathbf{k}^2 \frac{e^{i k {\left\lvert{\mathbf{a}}\right\rvert}}}{{\left\lvert{\mathbf{a}}\right\rvert}} d^3 \mathbf{a}  \\ &=-\frac{s(\mathbf{r})}{4\pi} \int_{dA} \left( \boldsymbol{\nabla}_\mathbf{a} \frac{e^{i k {\left\lvert{\mathbf{a}}\right\rvert}}}{{\left\lvert{\mathbf{a}}\right\rvert}} \right) \cdot \hat{\mathbf{a}} d^2 \mathbf{a} -\frac{s(\mathbf{r})}{4\pi} \int_{dV}\mathbf{k}^2 \frac{e^{i k {\left\lvert{\mathbf{a}}\right\rvert}}}{{\left\lvert{\mathbf{a}}\right\rvert}} d^3 \mathbf{a} \end{aligned}

To complete these evaluations, we can now employ a spherical coordinate change of variables. Let’s do the \mathbf{k}^2 volume integral first. We have

\begin{aligned}\int_{dV}\mathbf{k}^2 \frac{e^{i k {\left\lvert{\mathbf{a}}\right\rvert}}}{{\left\lvert{\mathbf{a}}\right\rvert}} d^3 \mathbf{a} &=\int_{a = 0}^\epsilon \int_{\theta = 0}^\pi \int_{\phi=0}^{2\pi}\mathbf{k}^2 \frac{e^{i k a}}{a} a^2 da \sin\theta d\theta d\phi \\ &=4\pi k^2\int_{a = 0}^\epsilon a e^{i k a} da  \\ &=4\pi \int_{u = 0}^{k\epsilon}u e^{i u} du  \\ &=4\pi {\left.(-i u + 1) e^{i u} \right\vert}_0^{k \epsilon} \\ &=4 \pi \left( (-i k \epsilon + 1)e^{i k \epsilon} - 1 \right)\end{aligned}

To evaluate the surface integral we note that we'll require only the radial portion of the gradient, so we have

\begin{aligned}\left( \boldsymbol{\nabla}_\mathbf{a} \frac{e^{i k {\left\lvert{\mathbf{a}}\right\rvert}}}{{\left\lvert{\mathbf{a}}\right\rvert}} \right) \cdot \hat{\mathbf{a}}&=\left( \hat{\mathbf{a}} \frac{\partial {}}{\partial {a}} \frac{e^{i k a}}{a} \right) \cdot \hat{\mathbf{a}} \\ &=\frac{\partial {}}{\partial {a}} \frac{e^{i k a}}{a} \\ &=\left( i k \frac{1}{{a}} - \frac{1}{{a^2}} \right)e^{i k a} \\ &=\left( i k a - 1 \right)\frac{e^{i k a}}{a^2}\end{aligned}

Our area element is a^2 \sin\theta d\theta d\phi, so we are left with

\begin{aligned}\begin{aligned}\int_{dA} \left( \boldsymbol{\nabla}_\mathbf{a} \frac{e^{i k {\left\lvert{\mathbf{a}}\right\rvert}}}{{\left\lvert{\mathbf{a}}\right\rvert}} \right) \cdot \hat{\mathbf{a}} d^2 \mathbf{a} &={\left.{{\int_{\theta = 0}^\pi \int_{\phi=0}^{2\pi}\left( i k a - 1 \right)\frac{e^{i k a}}{a^2}a^2 \sin\theta d\theta d\phi }}\right\vert}_{{a = \epsilon}}\\ &=4 \pi\left( i k \epsilon - 1 \right) e^{i k \epsilon}\end{aligned}\end{aligned} \hspace{\stretch{1}}(2.20)

Putting everything back together we have

\begin{aligned}-\frac{1}{{4\pi}} \int_{\text{all space}} (\boldsymbol{\nabla}^2 + \mathbf{k}^2) \frac{e^{i k {\left\lvert{\mathbf{r} - \mathbf{r}'}\right\rvert}}}{{\left\lvert{\mathbf{r} - \mathbf{r}'}\right\rvert}} s(\mathbf{r}') d^3 \mathbf{r}'&=-s(\mathbf{r})\left((-i k \epsilon + 1)e^{i k \epsilon} - 1 +\left( i k \epsilon - 1 \right) e^{i k \epsilon}\right) \\ &=-s(\mathbf{r})\left((-i k \epsilon + 1 + i k \epsilon - 1 )e^{i k \epsilon} - 1 \right) \end{aligned}

But this is just

\begin{aligned}-\frac{1}{{4\pi}} \int_{\text{all space}} (\boldsymbol{\nabla}^2 + \mathbf{k}^2) \frac{e^{i k {\left\lvert{\mathbf{r} - \mathbf{r}'}\right\rvert}}}{{\left\lvert{\mathbf{r} - \mathbf{r}'}\right\rvert}} s(\mathbf{r}') d^3 \mathbf{r}' = s(\mathbf{r}).\end{aligned} \hspace{\stretch{1}}(2.21)

This completes the desired verification of the Green's function for the Helmholtz operator. Observe the perfect cancellation here, so the limit of \epsilon \rightarrow 0 can be independent of how large k is made. You have to complete both the Laplacian and the \mathbf{k}^2 portions of the integrals and add them before taking any limits, or else you'll get into trouble (as I did in my first attempt).
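That cancellation can also be seen directly: the two \epsilon dependent pieces computed above sum to a constant. Here's a small sympy sketch of that:

import sympy as sp

k, eps = sp.symbols('k epsilon', positive=True)

volume_piece = 4*sp.pi*((-sp.I*k*eps + 1)*sp.exp(sp.I*k*eps) - 1)   # from the k^2 volume integral
surface_piece = 4*sp.pi*(sp.I*k*eps - 1)*sp.exp(sp.I*k*eps)         # from the divergence/surface term
print(sp.simplify(volume_piece + surface_piece))                    # -4*pi, independent of k and epsilon

Multiplying by the -s(\mathbf{r})/4\pi prefactor recovers s(\mathbf{r}), with no \epsilon or k dependence left over.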

References

[1] M. Schwartz. Principles of Electrodynamics. Dover Publications, 1987.


A short derivation of the time dependent perturbation result.

Posted by peeterjoot on December 9, 2011

[Click here for a PDF of this post with nicer formatting and figures if the post had any (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

Guts

A super short derivation of the time dependent perturbation result. With

\begin{aligned}{\left\lvert {\psi(t)} \right\rangle} = \sum_k c_k(t) e^{-i\omega_k t} {\left\lvert {k} \right\rangle}\end{aligned} \hspace{\stretch{1}}(1.1)

\begin{aligned}0&=\left( H_0 + H' - i\hbar \frac{d}{dt} \right){\left\lvert {\psi(t)} \right\rangle} \\ &=\left( H_0 + H' - i\hbar \frac{d}{dt} \right)\sum_k c_k e^{-i\omega_k t} {\left\lvert {k} \right\rangle} \\ &=\sum_k e^{-i\omega_k t} \left(\not{{c_k E_k}} + H' c_k - \not{{i\hbar (-i \omega_k) c_k}} -i\hbar c_k'\right){\left\lvert {k} \right\rangle}\end{aligned}

Bra'ing with {\left\langle {m} \right\rvert} gives

\begin{aligned}\sum_k e^{-i\omega_k t} H'_{mk} c_k =i\hbar e^{-i\omega_m t} c_m',\end{aligned} \hspace{\stretch{1}}(1.2)

or, with \omega_{km} = \omega_k - \omega_m,

\begin{aligned}c_m'=\frac{1}{{i\hbar}}\sum_k e^{-i\omega_{km} t} H'_{mk} c_k \end{aligned} \hspace{\stretch{1}}(1.3)

Now we can make the assumptions about the initial state and away we go.
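As a quick check of 1.3 (a numerical sketch of my own, not part of the original derivation), here is a two level example with a small constant coupling switched on at t = 0. We integrate the exact coupled amplitude equations and compare {\left\lvert{c_2}\right\rvert}^2 against the usual first order result 4 {\left\lvert{H'_{21}}\right\rvert}^2 \sin^2( \omega_{21} t/2 )/(\hbar \omega_{21})^2.

import numpy as np
from scipy.integrate import solve_ivp

hbar = 1.0
E1, E2 = 0.0, 1.0
w21 = (E2 - E1) / hbar        # omega_2 - omega_1
V = 0.05                      # small constant coupling, H'_{12} = H'_{21} = V, zero on the diagonal

def rhs(t, y):
    c1, c2 = y[0] + 1j*y[1], y[2] + 1j*y[3]
    dc1 = (V / (1j*hbar)) * np.exp(-1j*w21*t) * c2   # c_1' = (1/i hbar) e^{-i w21 t} H'_{12} c_2
    dc2 = (V / (1j*hbar)) * np.exp(+1j*w21*t) * c1   # c_2' = (1/i hbar) e^{+i w21 t} H'_{21} c_1
    return [dc1.real, dc1.imag, dc2.real, dc2.imag]

t_eval = np.linspace(0.0, 20.0, 200)
sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 0.0, 0.0, 0.0], t_eval=t_eval, rtol=1e-9, atol=1e-12)

p2_exact = sol.y[2]**2 + sol.y[3]**2
p2_first_order = (4 * V**2 / (hbar * w21)**2) * np.sin(w21 * t_eval / 2)**2
print(np.max(np.abs(p2_exact - p2_first_order)))   # small compared to the ~0.01 oscillation amplitude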


One more adiabatic perturbation derivation.

Posted by peeterjoot on December 8, 2011

[Click here for a PDF of this post with nicer formatting and figures if the post had any (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

Motivation.

I liked one of the adiabatic perturbation derivations that I did to review the material, and am recording it for reference.

Build up.

In time dependent perturbation theory we started by noting that our ket in the interaction picture, for a Hamiltonian H = H_0 + H'(t), took the form

\begin{aligned}{\left\lvert {\alpha_S(t)} \right\rangle} = e^{-i H_0 t/\hbar} {\left\lvert {\alpha_I(t)} \right\rangle} = e^{-i H_0 t/\hbar} U_I(t) {\left\lvert {\alpha_I(0)} \right\rangle}.\end{aligned} \hspace{\stretch{1}}(2.1)

Here we have basically assumed that the time evolution can be factored into a portion dependent on only the static portion of the Hamiltonian, with some other operator U_I(t), providing the remainder of the time evolution. From 2.1 that operator U_I(t) is found to behave according to

\begin{aligned}i \hbar \frac{d{{U_I}}}{dt} = e^{i H_0 t/\hbar} H'(t) e^{-i H_0 t/\hbar} U_I,\end{aligned} \hspace{\stretch{1}}(2.2)

but for our purposes we just assumed it existed, and used this for motivation. With the assumption that the interaction picture kets can be written in terms of the basis kets for the system at t=0 we write our Schrödinger ket as

\begin{aligned}{\left\lvert {\psi} \right\rangle} = \sum_k e^{-i H_0 t/\hbar} a_k(t) {\left\lvert {k} \right\rangle}= \sum_k e^{-i \omega_k t} a_k(t) {\left\lvert {k} \right\rangle},\end{aligned} \hspace{\stretch{1}}(2.3)

where {\left\lvert {k} \right\rangle} are the energy eigenkets for the initial time problem, with \hbar \omega_k = E_k^0,

\begin{aligned}H_0 {\left\lvert {k} \right\rangle} = E_k^0 {\left\lvert {k} \right\rangle}.\end{aligned} \hspace{\stretch{1}}(2.4)

Adiabatic case.

For the adiabatic problem, we assume the system is changing very slowly, as described by the instantaneous energy eigenkets

\begin{aligned}H(t) {\left\lvert {k(t)} \right\rangle} = E_k(t) {\left\lvert {k(t)} \right\rangle}.\end{aligned} \hspace{\stretch{1}}(3.5)

Can we assume a similar representation to 2.3 above, but allow {\left\lvert {k} \right\rangle} to vary in time? This doesn’t quite work since {\left\lvert {k(t)} \right\rangle} are no longer eigenkets of H_0

\begin{aligned}{\left\lvert {\psi} \right\rangle} = \sum_k e^{-i H_0 t/\hbar} a_k(t) {\left\lvert {k(t)} \right\rangle}\ne \sum_k e^{-i \omega_k t} a_k(t) {\left\lvert {k(t)} \right\rangle}.\end{aligned} \hspace{\stretch{1}}(3.6)

Operating with e^{i H_0 t/\hbar} does not give the proper time evolution of {\left\lvert {k(t)} \right\rangle}, and we will in general have a more complex functional dependence in our evolution operator for each {\left\lvert {k(t)} \right\rangle}. Instead of an \omega_k t dependence in this time evolution operator let’s assume we have some function \alpha_k(t) to be determined, and can write our ket as

\begin{aligned}{\left\lvert {\psi} \right\rangle} = \sum_k e^{-i \alpha_k(t)} a_k(t) {\left\lvert {k(t)} \right\rangle}.\end{aligned} \hspace{\stretch{1}}(3.7)

Operating on this with our energy operator equation we have

\begin{aligned}0 &=\left(H - i \hbar \frac{d}{dt} \right) {\left\lvert {\psi} \right\rangle} \\ &=\left(H - i \hbar \frac{d}{dt} \right) \sum_k e^{-i \alpha_k} a_k {\left\lvert {k} \right\rangle} \\ &=\sum_k e^{-i \alpha_k(t)} \left( \left( E_k a_k-i \hbar (-i \alpha_k' a_k + a_k')\right) {\left\lvert {k} \right\rangle}-i \hbar a_k {\left\lvert {k'} \right\rangle}\right) \\ \end{aligned}

Here I've written {\left\lvert {k'} \right\rangle} = d{\left\lvert {k} \right\rangle}/dt. In our original time dependent perturbation treatment the -i \alpha_k' term was -i \omega_k, so this killed off the E_k. If we assume this still kills off the E_k, we must have

\begin{aligned}\alpha_k = \frac{1}{{\hbar}} \int_0^t E_k(t') dt',\end{aligned} \hspace{\stretch{1}}(3.8)

and are left with

\begin{aligned}0=\sum_k e^{-i \alpha_k(t)} \left( a_k' {\left\lvert {k} \right\rangle}+a_k {\left\lvert {k'} \right\rangle}\right).\end{aligned} \hspace{\stretch{1}}(3.9)

Bra’ing with {\left\langle {m} \right\rvert} we have

\begin{aligned}0=e^{-i \alpha_m(t)} a_m' +e^{-i \alpha_m(t)} a_m \left\langle{{m}} \vert {{m'}}\right\rangle+\sum_{k \ne m} e^{-i \alpha_k(t)} a_k \left\langle{{m}} \vert {{k'}}\right\rangle,\end{aligned} \hspace{\stretch{1}}(3.10)

or

\begin{aligned}a_m' +a_m \left\langle{{m}} \vert {{m'}}\right\rangle=-\sum_{k \ne m} e^{-i \alpha_k(t)} e^{i \alpha_m(t)} a_k \left\langle{{m}} \vert {{k'}}\right\rangle,\end{aligned} \hspace{\stretch{1}}(3.11)

The LHS is a perfect differential if we introduce an integrating factor e^{\int_0^t \left\langle{{m}} \vert {{m'}}\right\rangle}, so we can write

\begin{aligned}e^{-\int_0^t \left\langle{{m}} \vert {{m'}}\right\rangle} ( a_m e^{\int_0^t \left\langle{{m}} \vert {{m'}}\right\rangle } )'=-\sum_{k \ne m} e^{-i \alpha_k(t)} e^{i \alpha_m(t)} a_k \left\langle{{m}} \vert {{k'}}\right\rangle,\end{aligned} \hspace{\stretch{1}}(3.12)

This suggests that we want to form a new function

\begin{aligned}b_m = a_m e^{\int_0^t \left\langle{{m}} \vert {{m'}}\right\rangle } \end{aligned} \hspace{\stretch{1}}(3.13)

or

\begin{aligned}a_m = b_m e^{-\int_0^t \left\langle{{m}} \vert {{m'}}\right\rangle } \end{aligned} \hspace{\stretch{1}}(3.14)

Plugging this into our assumed representation we have a more concrete form

\begin{aligned}{\left\lvert {\psi} \right\rangle} = \sum_k e^{- \int_0^t dt' ( i \omega_k + \left\langle{{k}} \vert {{k'}}\right\rangle ) } b_k(t) {\left\lvert {k(t)} \right\rangle}.\end{aligned} \hspace{\stretch{1}}(3.15)

Writing

\begin{aligned}\Gamma_k = i \left\langle{{k}} \vert {{k'}}\right\rangle,\end{aligned} \hspace{\stretch{1}}(3.16)

this becomes

\begin{aligned}{\left\lvert {\psi} \right\rangle} = \sum_k e^{- i\int_0^t dt' ( \omega_k - \Gamma_k ) } b_k(t) {\left\lvert {k(t)} \right\rangle}.\end{aligned} \hspace{\stretch{1}}(3.17)

A final pass.

Now that we appear to have a good representation for any given state, let's start over and examine its time evolution, reapplying our instantaneous energy operator equality

\begin{aligned}0 &=\left(H - i \hbar \frac{d}{dt} \right){\left\lvert {\psi} \right\rangle}  \\ &=\left(H - i \hbar \frac{d}{dt} \right)\sum_k e^{- i\int_0^t dt' ( \omega_k - \Gamma_k ) } b_k {\left\lvert {k} \right\rangle} \\ &=- i \hbar \sum_k e^{- i\int_0^t dt' ( \omega_k - \Gamma_k ) } \left(i \Gamma_kb_k {\left\lvert {k} \right\rangle} +b_k' {\left\lvert {k} \right\rangle} +b_k {\left\lvert {k'} \right\rangle} \right).\end{aligned}

Bra’ing with {\left\langle {m} \right\rvert} we find

\begin{aligned}0&=e^{- i\int_0^t dt' ( \omega_m - \Gamma_m ) } i \Gamma_mb_m +e^{- i\int_0^t dt' ( \omega_m - \Gamma_m ) } b_m' \\ &+e^{- i\int_0^t dt' ( \omega_m - \Gamma_m ) } b_m \left\langle{{m}} \vert {{m'}}\right\rangle +\sum_{k \ne m}e^{- i\int_0^t dt' ( \omega_k - \Gamma_k ) } b_k \left\langle{{m}} \vert {{k'}}\right\rangle \end{aligned}

Since \left\langle{{m}} \vert {{m'}}\right\rangle = -i \Gamma_m, the first and third terms cancel, leaving us just

\begin{aligned}b_m'=-\sum_{k \ne m}e^{- i\int_0^t dt' ( \omega_{km} - \Gamma_{km} ) } b_k \left\langle{{m}} \vert {{k'}}\right\rangle,\end{aligned} \hspace{\stretch{1}}(3.18)

where \omega_{km} = \omega_k - \omega_m and \Gamma_{km} = \Gamma_k - \Gamma_m.

Summary

We assumed that a ket for the system has a representation in the form

\begin{aligned}{\left\lvert {\psi} \right\rangle} = \sum_k e^{- i \alpha_k(t) } a_k(t) {\left\lvert {k(t)} \right\rangle},\end{aligned} \hspace{\stretch{1}}(4.20)

where a_k(t) and \alpha_k(t) are given or to be determined. Application of our energy operator identity provides us with an alternate representation that simplifies the results

\begin{aligned}{\left\lvert {\psi} \right\rangle} = \sum_k e^{- i\int_0^t dt' ( \omega_k - \Gamma_k ) } b_k(t) {\left\lvert {k(t)} \right\rangle}.\end{aligned} \hspace{\stretch{1}}(4.20)

With

\begin{aligned}{\left\lvert {m'} \right\rangle} &= \frac{d}{dt} {\left\lvert {m} \right\rangle} \\ \Gamma_m &= i \left\langle{{m}} \vert {{m'}}\right\rangle \\ \omega_{km} &= \omega_k - \omega_m \\ \Gamma_{km} &= \Gamma_k - \Gamma_m\end{aligned} \hspace{\stretch{1}}(4.21)

we find that our dynamics of the coefficients are related by

\begin{aligned}b_m'=-\sum_{k \ne m}e^{- i\int_0^t dt' ( \omega_{km} - \Gamma_{km} ) } b_k \left\langle{{m}} \vert {{k'}}\right\rangle,\end{aligned} \hspace{\stretch{1}}(4.25)
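To see what 4.25 is telling us numerically, here's a small sketch of my own (with \hbar = 1 and a two level Hamiltonian of my own choosing): for a slowly swept H(t), the instantaneous eigenstate populations {\left\lvert{b_k}\right\rvert}^2 stay nearly constant, which is just the statement that the k \ne m coupling on the right hand side can be neglected.

import numpy as np
from scipy.integrate import solve_ivp

Omega = 1.0
T = 200.0                                    # total sweep time; larger is more adiabatic

def H(t):
    delta = 4.0 * (2.0 * t / T - 1.0)        # detuning swept slowly from -4 to +4
    return 0.5 * np.array([[-delta, Omega], [Omega, delta]])

def rhs(t, y):
    psi = y[:2] + 1j * y[2:]
    dpsi = -1j * H(t) @ psi                  # i d|psi>/dt = H(t) |psi>, hbar = 1
    return np.concatenate([dpsi.real, dpsi.imag])

# start in the instantaneous ground state at t = 0
evals, evecs = np.linalg.eigh(H(0.0))
psi0 = evecs[:, 0].astype(complex)
sol = solve_ivp(rhs, (0.0, T), np.concatenate([psi0.real, psi0.imag]),
                t_eval=np.linspace(0.0, T, 400), rtol=1e-9, atol=1e-12)

ground_pop = []
for tk, yk in zip(sol.t, sol.y.T):
    psi = yk[:2] + 1j * yk[2:]
    evals, evecs = np.linalg.eigh(H(tk))
    ground_pop.append(abs(np.vdot(evecs[:, 0], psi))**2)   # |b_ground|^2 (phases drop out)

print(min(ground_pop))   # stays close to 1 for a slow enough sweep

Shrinking T makes the sweep less adiabatic, and the neglected coupling terms start to show up as population transfer.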


PHY456H1F: Quantum Mechanics II. Lecture 25 (Taught by Prof J.E. Sipe). Born approximation.

Posted by peeterjoot on December 7, 2011

[Click here for a PDF of this post with nicer formatting and figures if the post had any (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

Disclaimer.

Peeter’s lecture notes from class. May not be entirely coherent.

Born approximation.

READING: section 20 [1]

We’ve been arguing that we can write the stationary equation

\begin{aligned}\left( \boldsymbol{\nabla}^2 + \mathbf{k}^2\right) \psi_\mathbf{k}(\mathbf{r}) = s(\mathbf{r})\end{aligned} \hspace{\stretch{1}}(2.1)

with

\begin{aligned}s(\mathbf{r}) = \frac{2\mu}{\hbar^2} V(\mathbf{r}) \psi_\mathbf{k}(\mathbf{r})\end{aligned} \hspace{\stretch{1}}(2.2)

\begin{aligned}\psi_\mathbf{k}(\mathbf{r}) = \psi_\mathbf{k}^{\text{homogeneous}}(\mathbf{r}) + \psi_\mathbf{k}^{\text{particular}}(\mathbf{r})\end{aligned} \hspace{\stretch{1}}(2.3)

Introduce Green function

\begin{aligned}\left( \boldsymbol{\nabla}^2 + \mathbf{k}^2\right) G^0(\mathbf{r}, \mathbf{r}') = \delta(\mathbf{r}- \mathbf{r}')\end{aligned} \hspace{\stretch{1}}(2.4)

Suppose that I can find G^0(\mathbf{r}, \mathbf{r}'), then

\begin{aligned}\psi_\mathbf{k}^{\text{particular}}(\mathbf{r}) = \int G^0(\mathbf{r}, \mathbf{r}') s(\mathbf{r}') d^3 \mathbf{r}'\end{aligned} \hspace{\stretch{1}}(2.5)

It turns out that finding the Green’s function G^0(\mathbf{r}, \mathbf{r}') is not so hard. Note the following, for k = 0, we have

\begin{aligned}\boldsymbol{\nabla}^2 G^0_0(\mathbf{r}, \mathbf{r}') = \delta(\mathbf{r} - \mathbf{r}')\end{aligned} \hspace{\stretch{1}}(2.6)

(where a zero subscript is used to mark the k = 0 case). We know this Green’s function from electrostatics, and conclude that

\begin{aligned}G^0_0(\mathbf{r}, \mathbf{r}') = - \frac{1}{{4 \pi}} \frac{1}{{{\left\lvert{\mathbf{r} - \mathbf{r}'}\right\rvert}}}\end{aligned} \hspace{\stretch{1}}(2.7)

For \mathbf{r} \ne \mathbf{r}' we can easily show that

\begin{aligned}G^0(\mathbf{r}, \mathbf{r}') = - \frac{1}{{4 \pi}} \frac{e^{i k{\left\lvert{\mathbf{r} - \mathbf{r}'}\right\rvert}}}{{\left\lvert{\mathbf{r} - \mathbf{r}'}\right\rvert}}\end{aligned} \hspace{\stretch{1}}(2.8)

This is correct for all \mathbf{r} because it also gives the right limit as \mathbf{r} \rightarrow \mathbf{r}'. This argument was first given by Lorentz. We can now write our solution

\begin{aligned}\psi_\mathbf{k}(\mathbf{r}) = e^{i \mathbf{k} \cdot \mathbf{r}}- \frac{1}{{4 \pi}} \int \frac{e^{i k{\left\lvert{\mathbf{r} - \mathbf{r}'}\right\rvert}}}{{\left\lvert{\mathbf{r} - \mathbf{r}'}\right\rvert}} s(\mathbf{r}') d^3 \mathbf{r}'\end{aligned} \hspace{\stretch{1}}(2.9)

This is of no immediate help since we don't know \psi_\mathbf{k}(\mathbf{r}), and that is embedded in s(\mathbf{r}). Writing out s(\mathbf{r}') explicitly, we have

\begin{aligned}\psi_\mathbf{k}(\mathbf{r}) = e^{i \mathbf{k} \cdot \mathbf{r}}- \frac{2 \mu}{4 \pi \hbar^2} \int \frac{e^{i k{\left\lvert{\mathbf{r} - \mathbf{r}'}\right\rvert}}}{{\left\lvert{\mathbf{r} - \mathbf{r}'}\right\rvert}} V(\mathbf{r}') \psi_\mathbf{k}(\mathbf{r}') d^3 \mathbf{r}'\end{aligned} \hspace{\stretch{1}}(2.10)

Now look at this for \mathbf{r} \gg \mathbf{r}'

\begin{aligned}{\left\lvert{\mathbf{r} - \mathbf{r}'}\right\rvert} &= \left( \mathbf{r}^2 + (\mathbf{r}')^2 - 2 \mathbf{r} \cdot \mathbf{r}'\right)^{1/2} \\ &=r \left( 1 + \frac{(\mathbf{r}')^2}{\mathbf{r}^2} - 2 \frac{1}{{\mathbf{r}^2}} \mathbf{r} \cdot \mathbf{r}'\right)^{1/2} \\ &=r \left( 1 - \frac{1}{{2}} \frac{2}{\mathbf{r}^2} \mathbf{r} \cdot \mathbf{r}'+ O\left(\frac{r'}{r}\right)^2\right) \\ &=r - \hat{\mathbf{r}} \cdot \mathbf{r}'+ O\left(\frac{{r'}^2}{r}\right)\end{aligned}

We get

\begin{aligned}\begin{aligned}\psi_\mathbf{k}(\mathbf{r}) &\rightarrow e^{i \mathbf{k} \cdot \mathbf{r}} - \frac{2 \mu}{4 \pi \hbar^2} \frac{ e^{i k r}}{r} \int e^{-i k \hat{\mathbf{r}} \cdot \mathbf{r}'} V(\mathbf{r}') \psi_\mathbf{k}(\mathbf{r}') d^3 \mathbf{r}' \\ &=e^{i \mathbf{k} \cdot \mathbf{r}} + f_\mathbf{k}(\theta, \phi) \frac{ e^{i k r}}{r},\end{aligned}\end{aligned} \hspace{\stretch{1}}(2.11)

where

\begin{aligned}f_\mathbf{k}(\theta, \phi) =- \frac{\mu}{2 \pi \hbar^2} \int e^{-i k \hat{\mathbf{r}} \cdot \mathbf{r}'} V(\mathbf{r}') \psi_\mathbf{k}(\mathbf{r}') d^3 \mathbf{r}' \end{aligned} \hspace{\stretch{1}}(2.12)

If the scattering is weak we have the Born approximation

\begin{aligned}f_\mathbf{k}(\theta, \phi) =- \frac{\mu}{2 \pi \hbar^2} \int e^{-i k \hat{\mathbf{r}} \cdot \mathbf{r}'} V(\mathbf{r}') e^{i \mathbf{k} \cdot \mathbf{r}'} d^3 \mathbf{r}',\end{aligned} \hspace{\stretch{1}}(2.13)

or

\begin{aligned}\psi_\mathbf{k}(\mathbf{r}) =e^{i \mathbf{k} \cdot \mathbf{r}} - \frac{\mu}{2 \pi \hbar^2} \frac{ e^{i k r}}{r} \int e^{-i k \hat{\mathbf{r}} \cdot \mathbf{r}'} V(\mathbf{r}') e^{i \mathbf{k} \cdot \mathbf{r}'} d^3 \mathbf{r}'.\end{aligned} \hspace{\stretch{1}}(2.14)

Should we wish to make a further approximation, we can take the wave function resulting from application of the Born approximation, and use that a second time. This gives us the “Born again” approximation of

\begin{aligned}\begin{aligned}\psi_\mathbf{k}(\mathbf{r}) &=e^{i \mathbf{k} \cdot \mathbf{r}} - \frac{\mu}{2 \pi \hbar^2} \frac{ e^{i k r}}{r} \int e^{-i k \hat{\mathbf{r}} \cdot \mathbf{r}'} V(\mathbf{r}') \left( e^{i \mathbf{k} \cdot \mathbf{r}'} - \frac{\mu}{2 \pi \hbar^2} \frac{ e^{i k r'}}{r'} \int e^{-i k \hat{\mathbf{r}}' \cdot \mathbf{r}''} V(\mathbf{r}'') e^{i \mathbf{k} \cdot \mathbf{r}''} d^3 \mathbf{r}''\right) d^3 \mathbf{r}' \\ &=e^{i \mathbf{k} \cdot \mathbf{r}} - \frac{\mu}{2 \pi \hbar^2} \frac{ e^{i k r}}{r} \int e^{-i k \hat{\mathbf{r}} \cdot \mathbf{r}'} V(\mathbf{r}') e^{i \mathbf{k} \cdot \mathbf{r}'} d^3 \mathbf{r}' \\ &\quad +\frac{\mu^2}{(2 \pi)^2 \hbar^4}\frac{ e^{i k r}}{r} \int e^{-i k \hat{\mathbf{r}} \cdot \mathbf{r}'} V(\mathbf{r}') \frac{ e^{i k r'}}{r'} \int e^{-i k \hat{\mathbf{r}}' \cdot \mathbf{r}''} V(\mathbf{r}'') e^{i \mathbf{k} \cdot \mathbf{r}''} d^3 \mathbf{r}'' d^3 \mathbf{r}'.\end{aligned}\end{aligned} \hspace{\stretch{1}}(2.15)
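Here's a small numerical sketch of my own (not from the lecture) of the first Born approximation for a spherically symmetric potential. For V = V(r') the angular part of the integral in 2.13 can be done, leaving f(q) = -(2\mu/\hbar^2 q) \int_0^\infty r V(r) \sin(q r) dr with momentum transfer q = 2 k \sin(\theta/2). For a Yukawa potential V(r) = V_0 e^{-a r}/r this integral is elementary, giving f(q) = -2\mu V_0/(\hbar^2(q^2 + a^2)), which provides a cross check. The parameter values below are arbitrary choices in \hbar = \mu = 1 units.

import numpy as np
from scipy.integrate import quad

hbar = mu = 1.0
V0, a = 0.3, 1.5          # Yukawa strength and screening parameter (arbitrary)
k = 2.0                   # incident wave number (arbitrary)

def born_amplitude(theta):
    q = 2.0 * k * np.sin(theta / 2.0)
    radial = lambda r: V0 * np.exp(-a * r) * np.sin(q * r)   # r V(r) sin(q r) for the Yukawa form
    integral, _ = quad(radial, 0.0, np.inf)
    return -(2.0 * mu / (hbar**2 * q)) * integral

theta = 1.0
q = 2.0 * k * np.sin(theta / 2.0)
print(born_amplitude(theta))                                 # numerically integrated Born amplitude
print(-2.0 * mu * V0 / (hbar**2 * (q**2 + a**2)))            # closed form Yukawa result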

References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.


PHY456H1F: Quantum Mechanics II. Lecture 24 (Taught by Prof J.E. Sipe). 3D Scattering cross sections (cont.)

Posted by peeterjoot on December 5, 2011

[Click here for a PDF of this post with nicer formatting and figures if the post had any (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

Disclaimer.

Peeter’s lecture notes from class. May not be entirely coherent.

Scattering cross sections.

READING: section 20 [1]

Recall that we are studying the case of a potential that is zero outside of a fixed bound, V(\mathbf{r}) = 0 for r > r_0, as in the figure below,

Figure: Bounded potential.

and were looking for solutions to Schrödinger's equation

\begin{aligned}-\frac{\hbar^2}{2\mu} \boldsymbol{\nabla}^2\psi_\mathbf{k}(\mathbf{r})+ V(\mathbf{r})\psi_\mathbf{k}(\mathbf{r})=\frac{\hbar^2 \mathbf{k}^2}{2 \mu}\psi_\mathbf{k}(\mathbf{r}),\end{aligned} \hspace{\stretch{1}}(2.1)

in regions of space, where r > r_0 is very large. We found

\begin{aligned}\psi_\mathbf{k}(\mathbf{r}) \sim e^{i \mathbf{k} \cdot \mathbf{r}} + \frac{e^{i k r}}{r} f_\mathbf{k}(\theta, \phi).\end{aligned} \hspace{\stretch{1}}(2.2)

For r \le r_0 this will be something much more complicated.

To study scattering we’ll use the concept of probability flux as in electromagnetism

\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{j} + \dot{\rho} = 0\end{aligned} \hspace{\stretch{1}}(2.3)

Using

\begin{aligned}\rho(\mathbf{r}, t) =\psi_\mathbf{k}(\mathbf{r})^{*}\psi_\mathbf{k}(\mathbf{r})\end{aligned} \hspace{\stretch{1}}(2.4)

we find

\begin{aligned}\mathbf{j}(\mathbf{r}, t) = \frac{\hbar}{2 \mu i} \Bigl(\psi_\mathbf{k}(\mathbf{r})^{*} \boldsymbol{\nabla} \psi_\mathbf{k}(\mathbf{r})- (\boldsymbol{\nabla} \psi_\mathbf{k}^{*}(\mathbf{r})) \psi_\mathbf{k}(\mathbf{r})\Bigr)\end{aligned} \hspace{\stretch{1}}(2.5)

when

\begin{aligned}-\frac{\hbar^2}{2\mu} \boldsymbol{\nabla}^2\psi_\mathbf{k}(\mathbf{r})+ V(\mathbf{r})\psi_\mathbf{k}(\mathbf{r})=i \hbar \frac{\partial {\psi_\mathbf{k}(\mathbf{r})}}{\partial {t}}\end{aligned} \hspace{\stretch{1}}(2.6)

In a fashion similar to what we did in the 1D case, let’s suppose that we can write our wave function

\begin{aligned}\psi(\mathbf{r}, t_{\text{initial}}) = \int d^3k \alpha(\mathbf{k}, t_{\text{initial}}) \psi_\mathbf{k}(\mathbf{r})\end{aligned} \hspace{\stretch{1}}(2.7)

and treat the scattering as the scattering of a plane wave front (idealizing a set of wave packets) off of the object of interest, as depicted in the figure below.

Figure: Plane wave front incident on particle.

We assume that our incoming particles are sufficiently localized in k space, as depicted in the idealized representation of the figure below.

Figure: k space localized wave packet.

That is, we assume that \alpha(\mathbf{k}, t_{\text{initial}}) is localized.

\begin{aligned}\psi(\mathbf{r}, t_{\text{initial}}) =\int d^3k\left(\alpha(\mathbf{k}, t_{\text{initial}})e^{i k_z z}+\alpha(\mathbf{k}, t_{\text{initial}}) \frac{e^{i k r}}{r} f_\mathbf{k}(\theta, \phi)\right)\end{aligned} \hspace{\stretch{1}}(2.8)

We suppose that

\begin{aligned}\alpha(\mathbf{k}, t_{\text{initial}}) = \alpha(\mathbf{k}) e^{-i \hbar k^2 t_{\text{initial}}/ 2\mu}\end{aligned} \hspace{\stretch{1}}(2.9)

where this is chosen (\alpha(\mathbf{k}, t_{\text{initial}}) is built in this fashion) so that the initial wave packet is non-zero only for z large in magnitude and negative.

This last integral can be approximated

\begin{aligned}\begin{aligned}\int d^3k\alpha(\mathbf{k}, t_{\text{initial}}) \frac{e^{i k r}}{r} f_\mathbf{k}(\theta, \phi)&\approx\frac{f_{\mathbf{k}_0}(\theta, \phi)}{r}\int d^3k\alpha(\mathbf{k}, t_{\text{initial}}) e^{i k r} \\ &\rightarrow 0\end{aligned}\end{aligned} \hspace{\stretch{1}}(2.10)

This is very much like the 1D case where we found no reflected component for our initial time.

We'll normally look in a locality well away from the wave front, as indicated in the figure below.

Figure: Point of measurement of scattering cross section.

There are situations where we do look in the locality of the wave front that has been scattered.

Our incoming wave is of the form

\begin{aligned}\psi_i = A e^{i k z} e^{-i \hbar k^2 t/2 \mu}\end{aligned} \hspace{\stretch{1}}(2.11)

Here we’ve made the approximation that k = {\left\lvert{\mathbf{k}}\right\rvert} \sim k_z. We can calculate the probability current

\begin{aligned}\mathbf{j} = \hat{\mathbf{z}} \frac{\hbar k}{\mu} {\left\lvert{A}\right\rvert}^2\end{aligned} \hspace{\stretch{1}}(2.12)

(notice the v = p/m like term above, with p = \hbar k).

For the scattered wave (dropping the A factor)

\begin{aligned}\mathbf{j} &=\frac{\hbar}{2 \mu i}\left(f_\mathbf{k}^{*}(\theta, \phi) \frac{e^{-i k r}}{r} \boldsymbol{\nabla} \left(f_\mathbf{k}(\theta, \phi) \frac{e^{i k r}}{r}\right)-\boldsymbol{\nabla} \left(f_\mathbf{k}^{*}(\theta, \phi) \frac{e^{-i k r}}{r}\right)f_\mathbf{k}(\theta, \phi) \frac{e^{i k r}}{r}\right)\\ &\approx\frac{\hbar}{2 \mu i}\left(f_\mathbf{k}^{*}(\theta, \phi) \frac{e^{-i k r}}{r} i k \hat{\mathbf{r}} f_\mathbf{k}(\theta, \phi)\frac{e^{i k r}}{r}-f_\mathbf{k}^{*}(\theta, \phi) \frac{e^{-i k r}}{r} (-i k \hat{\mathbf{r}}) f_\mathbf{k}(\theta, \phi)\frac{e^{i k r}}{r}\right)\end{aligned}

We find that the radial portion of the current density is

\begin{aligned}\hat{\mathbf{r}} \cdot \mathbf{j}&= \frac{\hbar}{2 \mu i} {\left\lvert{f}\right\rvert}^2 \frac{ 2 i k }{r^2} \\ &= \frac{\hbar k}{\mu} \frac{1}{{r^2}} {\left\lvert{f}\right\rvert}^2,\end{aligned}

and the flux through our element of solid angle is

\begin{aligned}\hat{\mathbf{r}} dA \cdot \mathbf{j}&=\frac{\text{probability}}{\text{unit area per time}} \times \text{area}  \\ &= \frac{\text{probability}}{\text{unit time}} \\ &=\frac{\hbar k}{\mu} \frac{{\left\lvert{f_\mathbf{k}(\theta, \phi)}\right\rvert}^2}{r^2} r^2 d\Omega \\ &=\frac{\hbar k }{\mu}{\left\lvert{f_\mathbf{k}(\theta, \phi)}\right\rvert}^2 d\Omega \\ &=j_{\text{incoming}}\underbrace{{\left\lvert{f_\mathbf{k}(\theta, \phi)}\right\rvert}^2}_{d\sigma/d\Omega} d\Omega.\end{aligned}

We identify the differential scattering cross section above

\begin{aligned}\frac{d\sigma}{d\Omega}={\left\lvert{f_\mathbf{k}(\theta, \phi)}\right\rvert}^2\end{aligned} \hspace{\stretch{1}}(2.13)

and the total cross section is

\begin{aligned}\sigma = \int {\left\lvert{f_\mathbf{k}(\theta, \phi)}\right\rvert}^2 d\Omega\end{aligned} \hspace{\stretch{1}}(2.14)
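As a small worked example of these last two results (a sketch of my own, not from the lecture), take f_\mathbf{k} to be the Yukawa form of the Born amplitude used in the sketch in the Born approximation post, -2\mu V_0/(\hbar^2(q^2 + a^2)) with q = 2 k \sin(\theta/2), and integrate {\left\lvert{f}\right\rvert}^2 over solid angle numerically. For that particular amplitude the total cross section also has the closed form 16\pi\mu^2 V_0^2/(\hbar^4 a^2(a^2 + 4 k^2)), which the numbers can be checked against.

import numpy as np
from scipy.integrate import dblquad

hbar = mu = 1.0
V0, a, k = 0.3, 1.5, 2.0    # arbitrary parameter choices

def f(theta, phi):
    q2 = (2.0 * k * np.sin(theta / 2.0))**2
    return -2.0 * mu * V0 / (hbar**2 * (q2 + a**2))

# sigma = int_0^{2 pi} dphi int_0^pi dtheta sin(theta) |f(theta, phi)|^2
sigma, _ = dblquad(lambda theta, phi: np.sin(theta) * abs(f(theta, phi))**2,
                   0.0, 2.0 * np.pi,      # phi range (outer integral)
                   0.0, np.pi)            # theta range (inner integral)
print(sigma)
print(16 * np.pi * mu**2 * V0**2 / (hbar**4 * a**2 * (a**2 + 4 * k**2)))   # closed form for this f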

We've been somewhat unrealistic here since we've used a plane wave approximation, but a more careful treatment using wave packets, as sketched in the figure below,

Figure: Plane wave vs packet wave front.

will actually produce the same answer. For details we are referred to [2] and [3].

Working towards a solution

We’ve done a bunch of stuff here but are not much closer to a real solution because we don’t actually know what f_\mathbf{k} is.

Let's write Schrödinger's equation

\begin{aligned}-\frac{\hbar^2}{2\mu} \boldsymbol{\nabla}^2\psi_\mathbf{k}(\mathbf{r})+ V(\mathbf{r})\psi_\mathbf{k}(\mathbf{r})=\frac{\hbar^2 \mathbf{k}^2}{2 \mu}\psi_\mathbf{k}(\mathbf{r}),\end{aligned} \hspace{\stretch{1}}(2.15)

instead as

\begin{aligned}(\boldsymbol{\nabla}^2 + \mathbf{k}^2)\psi_\mathbf{k}(\mathbf{r})= s(\mathbf{r})\end{aligned} \hspace{\stretch{1}}(2.16)

where

\begin{aligned}s(\mathbf{r}) = \frac{2\mu}{\hbar^2} V(\mathbf{r}) \psi_\mathbf{k}(\mathbf{r})\end{aligned} \hspace{\stretch{1}}(2.17)

where s(\mathbf{r}) plays the role of a source term, one that itself depends on the unknown \psi_\mathbf{k}. We want

\begin{aligned}\psi_\mathbf{k}(\mathbf{r}) =\psi_\mathbf{k}^{\text{homogeneous}}(\mathbf{r})+ \psi_\mathbf{k}^{\text{particular}}(\mathbf{r})\end{aligned} \hspace{\stretch{1}}(2.18)

and

\begin{aligned}\psi_\mathbf{k}^{\text{homogeneous}}(\mathbf{r}) = e^{i \mathbf{k} \cdot \mathbf{r}}\end{aligned} \hspace{\stretch{1}}(2.19)

References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.

[2] A. Messiah, G.M. Temmer, and J. Potter. Quantum mechanics: two volumes bound as one. Dover Publications New York, 1999.

[3] J.R. Taylor. Scattering Theory: the Quantum Theory of Nonrelativistic Scattering, volume 1. 1972.
