
PHY450H1S. Relativistic Electrodynamics Lecture 18 (Taught by Prof. Erich Poppitz). Green’s function solution to Maxwell’s equation.

Posted by peeterjoot on March 12, 2011

[Click here for a PDF of this post with nicer formatting]

Reading.

Covering chapter 8 material from the text [1].

Covering lecture notes pp. 136-146: continued reminder of electrostatic Green’s function (136); the retarded Green’s function of the d’Alembert operator: derivation and properties (137-140); the solution of the d’Alembert equation with a source: retarded potentials (141-142)

Solving the forced wave equation.

See the notes for a complex variables and Fourier transform method of deriving the Green’s function. In class, we’ll just pull it out of a magic hat. We wish to solve

\begin{aligned}\square A^k = \partial_i \partial^i A^k = \frac{4 \pi}{c} j^k\end{aligned} \hspace{\stretch{1}}(2.1)

(with a \partial_i A^i = 0 gauge choice).

Our Green’s method utilizes

\begin{aligned}\square_{(\mathbf{x}, t)} G(\mathbf{x} - \mathbf{x}', t - t') = \delta^3( \mathbf{x} - \mathbf{x}') \delta( t - t')\end{aligned} \hspace{\stretch{1}}(2.2)

If we know such a function, our solution is simple to obtain

\begin{aligned}A^k(\mathbf{x}, t)= \int d^3 \mathbf{x}' dt' \frac{4 \pi}{c} j^k(\mathbf{x}', t') G(\mathbf{x} - \mathbf{x}', t - t')\end{aligned} \hspace{\stretch{1}}(2.3)

Proof:

\begin{aligned}\square_{(\mathbf{x}, t)} A^k(\mathbf{x}, t)&=\int d^3 \mathbf{x}' dt' \frac{4 \pi}{c} j^k(\mathbf{x}', t')\square_{(\mathbf{x}, t)}G(\mathbf{x} - \mathbf{x}', t - t') \\ &=\int d^3 \mathbf{x}' dt' \frac{4 \pi}{c} j^k(\mathbf{x}', t')\delta^3( \mathbf{x} - \mathbf{x}') \delta( t - t') \\ &=\frac{4 \pi}{c} j^k(\mathbf{x}, t)\end{aligned}

Claim:

\begin{aligned}G(\mathbf{x}, t) = \frac{\delta(t - {\left\lvert{\mathbf{x}}\right\rvert}/c)}{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} }\end{aligned} \hspace{\stretch{1}}(2.4)

This is the retarded Green’s function of the operator \square, where

\begin{aligned}\square G(\mathbf{x}, t) = \delta^3(\mathbf{x}) \delta(t)\end{aligned} \hspace{\stretch{1}}(2.5)

Proof of the d’Alembertian Green’s function

Our Prof is excellent at motivating any results that he pulls out of magic hats. He's said that he's included a derivation using Fourier transforms and tricky contour integration arguments in the class notes for anybody who is interested (and who also knows how to do contour integration). For those who don't know contour integration yet (some people are taking it concurrently), one can actually prove this by simply applying the wave equation operator to this function, treating the delta function as a normal function that one can take derivatives of, something that can be made well defined in the context of generalized functions. Chugging ahead with this approach we have

\begin{aligned}\square G(\mathbf{x}, t)=\left(\frac{1}{{c^2}} \frac{\partial^2 {{}}}{\partial {{t}}^2} - \Delta\right)\frac{\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right)}{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} }=\frac{\delta''\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right)}{4 \pi c^2 {\left\lvert{\mathbf{x}}\right\rvert} }- \Delta \frac{\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right)}{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} }.\end{aligned} \hspace{\stretch{1}}(2.6)

This starts things off and now things get a bit hairy. It’s helpful to consider a chain rule expansion of the Laplacian

\begin{aligned}\Delta (u v)&=\partial_{\alpha\alpha} (u v) \\ &=\partial_{\alpha} (v \partial_\alpha u+ u\partial_\alpha v) \\ &=(\partial_\alpha v) (\partial_\alpha u ) + v \partial_{\alpha\alpha} u+(\partial_\alpha u) (\partial_\alpha v ) + u \partial_{\alpha\alpha} v.\end{aligned}

In vector form this is

\begin{aligned}\Delta (u v) = u \Delta v + 2 (\boldsymbol{\nabla} u) \cdot (\boldsymbol{\nabla} v) + v \Delta u.\end{aligned} \hspace{\stretch{1}}(2.7)

Applying this to the Laplacian portion of 2.6 we have

\begin{aligned}\Delta \frac{\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right)}{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} }=\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right)\Delta\frac{1}{{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} }}+\left(\boldsymbol{\nabla} \frac{1}{{2 \pi {\left\lvert{\mathbf{x}}\right\rvert} }}\right)\cdot\left(\boldsymbol{\nabla}\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) \right)+\frac{1}{{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} }}\Delta\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right).\end{aligned} \hspace{\stretch{1}}(2.8)

Here we make the identification

\begin{aligned}\Delta \frac{1}{{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} }} = - \delta^3(\mathbf{x}).\end{aligned} \hspace{\stretch{1}}(2.9)

This could be considered a given from our knowledge of electrostatics, but it's not too much work to prove it directly.

An aside. Proving the Laplacian Green’s function.

If -1/{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} } is a Green’s function for the Laplacian, then the Laplacian of the convolution of this with a test function should recover that test function

\begin{aligned}\Delta \int d^3 \mathbf{x}' \left(-\frac{1}{{4 \pi {\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert} }} \right) f(\mathbf{x}') = f(\mathbf{x}).\end{aligned} \hspace{\stretch{1}}(2.10)

We can directly evaluate the LHS of this equation, following the approach in [2]. First note that the Laplacian can be pulled into the integral and operates only on the presumed Green’s function. For that operation we have

\begin{aligned}\Delta \left(-\frac{1}{{4 \pi {\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert} }} \right)=-\frac{1}{{4 \pi}} \boldsymbol{\nabla} \cdot \boldsymbol{\nabla} \frac{1}{{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}}}.\end{aligned} \hspace{\stretch{1}}(2.11)

It will be helpful to compute the gradient of various powers of {\left\lvert{\mathbf{x}}\right\rvert}

\begin{aligned}\boldsymbol{\nabla} {\left\lvert{\mathbf{x}}\right\rvert}^a&=e_\alpha \partial_\alpha (x^\beta x^\beta)^{a/2} \\ &=e_\alpha \left(\frac{a}{2}\right) 2 x^\beta {\delta_\beta}^\alpha {\left\lvert{\mathbf{x}}\right\rvert}^{a - 2} \\ &=a \mathbf{x} {\left\lvert{\mathbf{x}}\right\rvert}^{a - 2}.\end{aligned}

In particular, when \mathbf{x} \ne 0, this gives us

\begin{aligned}\boldsymbol{\nabla} {\left\lvert{\mathbf{x}}\right\rvert} &= \frac{\mathbf{x}}{{\left\lvert{\mathbf{x}}\right\rvert}} \\ \boldsymbol{\nabla} \frac{1}{{{\left\lvert{\mathbf{x}}\right\rvert}}} &= -\frac{\mathbf{x}}{{\left\lvert{\mathbf{x}}\right\rvert}^3} \\ \boldsymbol{\nabla} \frac{1}{{{\left\lvert{\mathbf{x}}\right\rvert}^3}} &= -3 \frac{\mathbf{x}}{{\left\lvert{\mathbf{x}}\right\rvert}^5}.\end{aligned} \hspace{\stretch{1}}(2.12)

For the Laplacian of 1/{\left\lvert{\mathbf{x}}\right\rvert}, at the points \mathbf{x} \ne 0 where this is well defined we have

\begin{aligned}\Delta \frac{1}{{{\left\lvert{\mathbf{x}}\right\rvert}}} &=\boldsymbol{\nabla} \cdot \boldsymbol{\nabla} \frac{1}{{{\left\lvert{\mathbf{x}}\right\rvert}}} \\ &= -\partial_\alpha \frac{x^\alpha}{{\left\lvert{\mathbf{x}}\right\rvert}^3} \\ &= -\frac{3}{{\left\lvert{\mathbf{x}}\right\rvert}^3} - x^\alpha \partial_\alpha \frac{1}{{\left\lvert{\mathbf{x}}\right\rvert}^3} \\ &= -\frac{3}{{\left\lvert{\mathbf{x}}\right\rvert}^3} - \mathbf{x} \cdot \boldsymbol{\nabla} \frac{1}{{\left\lvert{\mathbf{x}}\right\rvert}^3} \\ &= -\frac{3}{{\left\lvert{\mathbf{x}}\right\rvert}^3} + 3 \frac{\mathbf{x}^2}{{\left\lvert{\mathbf{x}}\right\rvert}^5}\end{aligned}

So we have a zero. This means that the Laplacian operation

\begin{aligned}\Delta \int d^3 \mathbf{x}' \frac{1}{{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert} }} f(\mathbf{x}') =\lim_{\epsilon = {\left\lvert{\mathbf{x} -\mathbf{x}'}\right\rvert} \rightarrow 0}f(\mathbf{x}) \int d^3 \mathbf{x}' \Delta \frac{1}{{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}}},\end{aligned} \hspace{\stretch{1}}(2.15)

can only have a value in a neighborhood of point \mathbf{x}. Writing \Delta = \boldsymbol{\nabla} \cdot \boldsymbol{\nabla} we have

\begin{aligned}\Delta \int d^3 \mathbf{x}' \frac{1}{{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert} }} f(\mathbf{x}') =\lim_{\epsilon = {\left\lvert{\mathbf{x} -\mathbf{x}'}\right\rvert} \rightarrow 0}f(\mathbf{x}) \int d^3 \mathbf{x}' \boldsymbol{\nabla} \cdot \left( -\frac{\mathbf{x} - \mathbf{x}'}{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}^3} \right).\end{aligned} \hspace{\stretch{1}}(2.16)

Observing that \boldsymbol{\nabla} \cdot f(\mathbf{x} -\mathbf{x}') = -\boldsymbol{\nabla}' f(\mathbf{x} - \mathbf{x}') we can put this in a form that allows for use of Stokes theorem so that we can convert this to a surface integral

\begin{aligned}\Delta \int d^3 \mathbf{x}' \frac{1}{{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert} }} f(\mathbf{x}') &=\lim_{\epsilon = {\left\lvert{\mathbf{x} -\mathbf{x}'}\right\rvert} \rightarrow 0}f(\mathbf{x}) \int d^3 \mathbf{x}' \boldsymbol{\nabla}' \cdot \frac{\mathbf{x} - \mathbf{x}'}{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}^3} \\ &=\lim_{\epsilon = {\left\lvert{\mathbf{x} -\mathbf{x}'}\right\rvert} \rightarrow 0}f(\mathbf{x}) \int d^2 \mathbf{x}' \mathbf{n} \cdot \frac{\mathbf{x} - \mathbf{x}'}{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}^3} \\ &= f(\mathbf{x}) \int_{\phi=0}^{2\pi} \int_{\theta = 0}^\pi \epsilon^2 \sin\theta d\theta d\phi \frac{\mathbf{x}' - \mathbf{x}}{{\left\lvert{\mathbf{x}' - \mathbf{x}}\right\rvert}} \cdot \frac{\mathbf{x} - \mathbf{x}'}{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}^3} \\ &= -f(\mathbf{x}) \int_{\phi=0}^{2\pi} \int_{\theta = 0}^\pi \epsilon^2 \sin\theta d\theta d\phi \frac{\epsilon^2}{\epsilon^4}\end{aligned}

where we use (\mathbf{x}' - \mathbf{x})/{\left\lvert{\mathbf{x}' - \mathbf{x}}\right\rvert} as the outwards normal for a sphere centered at \mathbf{x} of radius \epsilon. This integral is just -4 \pi, so we have

\begin{aligned}\Delta \int d^3 \mathbf{x}' \frac{1}{{-4 \pi {\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert} }} f(\mathbf{x}') =f(\mathbf{x}).\end{aligned} \hspace{\stretch{1}}(2.17)

Applying the Laplacian to the convolution of f(\mathbf{x}) with -1/4 \pi {\left\lvert{\mathbf{x}}\right\rvert} produces f(\mathbf{x}), allowing an identification of \Delta \left( -1/4 \pi {\left\lvert{\mathbf{x}}\right\rvert} \right) with a delta function, since the two have the same operational effect

\begin{aligned}\int d^3 \mathbf{x}' \delta(\mathbf{x} - \mathbf{x}') f(\mathbf{x}') =f(\mathbf{x}).\end{aligned} \hspace{\stretch{1}}(2.18)
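
As an aside to the aside, the \mathbf{x} \ne 0 part of this result is easy to verify symbolically. Here's a quick sympy check (my own, not from the lecture) that 1/4 \pi {\left\lvert{\mathbf{x}}\right\rvert} is harmonic away from the origin, so that all the action in 2.9 is concentrated at \mathbf{x} = 0:

import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
r = sp.sqrt(x**2 + y**2 + z**2)
G = 1/(4*sp.pi*r)

# Laplacian in Cartesian coordinates
laplacian = sp.diff(G, x, 2) + sp.diff(G, y, 2) + sp.diff(G, z, 2)
print(sp.simplify(laplacian))  # 0: harmonic everywhere except the origin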

Returning to the d’Alembertian Green’s function.

We need two additional computations to finish the job: the gradient and the Laplacian of the delta function

\begin{aligned}\boldsymbol{\nabla} \delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) &= ? \\ \Delta \delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) &= ?\end{aligned}

Consider \boldsymbol{\nabla} f(g(\mathbf{x})). This is

\begin{aligned}\boldsymbol{\nabla} f(g(\mathbf{x}))&=e_\alpha \frac{\partial {f(g(\mathbf{x}))}}{\partial {x^\alpha}} \\ &=e_\alpha \frac{\partial {f}}{\partial {g}} \frac{\partial {g}}{\partial {x^\alpha}},\end{aligned}

so we have

\begin{aligned}\boldsymbol{\nabla} f(g(\mathbf{x}))=\frac{\partial {f}}{\partial {g}} \boldsymbol{\nabla} g.\end{aligned} \hspace{\stretch{1}}(2.19)

The Laplacian is similar

\begin{aligned}\Delta f(g)&= \boldsymbol{\nabla} \cdot \left(\frac{\partial {f}}{\partial {g}} \boldsymbol{\nabla} g \right) \\ &= \partial_\alpha \left(\frac{\partial {f}}{\partial {g}} \partial_\alpha g \right) \\ &= \left( \partial_\alpha \frac{\partial {f}}{\partial {g}} \right) \partial_\alpha g +\frac{\partial {f}}{\partial {g}} \partial_{\alpha\alpha} g  \\ &= \frac{\partial^2 {{f}}}{\partial {{g}}^2} \left( \partial_\alpha g \right) (\partial_\alpha g)+\frac{\partial {f}}{\partial {g}} \Delta g,\end{aligned}

so we have

\begin{aligned}\Delta f(g)= \frac{\partial^2 {{f}}}{\partial {{g}}^2} (\boldsymbol{\nabla} g)^2 +\frac{\partial {f}}{\partial {g}} \Delta g\end{aligned} \hspace{\stretch{1}}(2.20)

With g(\mathbf{x}) = {\left\lvert{\mathbf{x}}\right\rvert}, we’ll need the Laplacian of this vector magnitude

\begin{aligned}\Delta {\left\lvert{\mathbf{x}}\right\rvert}&=\partial_\alpha \frac{x_\alpha}{{\left\lvert{\mathbf{x}}\right\rvert}} \\ &=\frac{3}{{\left\lvert{\mathbf{x}}\right\rvert}} + x_\alpha \partial_\alpha (x^\beta x^\beta)^{-1/2} \\ &=\frac{3}{{\left\lvert{\mathbf{x}}\right\rvert}} - \frac{x_\alpha x_\alpha}{{\left\lvert{\mathbf{x}}\right\rvert}^3} \\ &= \frac{2}{{\left\lvert{\mathbf{x}}\right\rvert}} \end{aligned}

So that we have

\begin{aligned}\boldsymbol{\nabla} \delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) &= -\frac{1}{{c}} \delta'\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) \frac{\mathbf{x}}{{\left\lvert{\mathbf{x}}\right\rvert}} \\ \Delta \delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) &=\frac{1}{{c^2}} \delta''\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) -\frac{1}{{c}} \delta'\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) \frac{2}{{\left\lvert{\mathbf{x}}\right\rvert}} \end{aligned} \hspace{\stretch{1}}(2.21)
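
These can be spot checked mechanically, since only the chain rule is at stake: substitute any smooth profile for the delta function. A small sympy verification of my own, with \sin standing in for \delta:

import sympy as sp

t, x, y, z, c = sp.symbols('t x y z c', positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)
u = t - r/c

g = sp.sin(u)  # smooth stand-in for the delta function
lap = sp.diff(g, x, 2) + sp.diff(g, y, 2) + sp.diff(g, z, 2)

# 2.21 predicts (1/c^2) g'' - (2/(c r)) g', with g' = cos(u), g'' = -sin(u)
expected = -sp.sin(u)/c**2 - 2*sp.cos(u)/(c*r)
print(sp.simplify(lap - expected))  # 0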

Now we have all the bits and pieces of 2.8 ready to assemble

\begin{aligned}\Delta \frac{\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right)}{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} }&=-\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) \delta^3(\mathbf{x}) \\ &\quad +\frac{1}{{2\pi}} \left( - \frac{\mathbf{x}}{{\left\lvert{\mathbf{x}}\right\rvert}^3} \right)\cdot\left( -\frac{1}{{c}} \delta'\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) \frac{\mathbf{x}}{{\left\lvert{\mathbf{x}}\right\rvert}} \right) \\ &\quad +\frac{1}{{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} }}\left(\frac{1}{{c^2}} \delta''\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) -\frac{1}{{c}} \delta'\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) \frac{2}{{\left\lvert{\mathbf{x}}\right\rvert}} \right) \\ &=-\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) \delta^3(\mathbf{x}) +\frac{1}{{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} c^2 }}\delta''\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) \end{aligned}

Since we also have

\begin{aligned}\frac{1}{{c^2}} \partial_{tt}\frac{\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right)}{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} }=\frac{\delta''\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right)}{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} c^2}\end{aligned} \hspace{\stretch{1}}(2.23)

The \delta'' terms cancel out in the d’Alembertian, leaving just

\begin{aligned}\square \frac{\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right)}{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} }=\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) \delta^3(\mathbf{x}) \end{aligned} \hspace{\stretch{1}}(2.24)

Noting that the spatial delta function is non-zero only when \mathbf{x} = 0, which means \delta(t - {\left\lvert{\mathbf{x}}\right\rvert}/c) = \delta(t) in this product, we finally have

\begin{aligned}\square \frac{\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right)}{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} }=\delta(t) \delta^3(\mathbf{x}) \end{aligned} \hspace{\stretch{1}}(2.25)

We write

\begin{aligned}G(\mathbf{x}, t) = \frac{\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right)}{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} }.\end{aligned} \hspace{\stretch{1}}(2.26)

Elaborating on the wave equation Green’s function

The Green’s function 2.26 is a distribution that is non-zero only on the future lightcone. Observe that for t < 0 we have

\begin{aligned}\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right)&=\delta\left(-{\left\lvert{t}\right\rvert} - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) \\ &= 0.\end{aligned}

We say that G is supported only on the future light cone. For a source event at \mathbf{x} = 0, only the contributions for t > 0 matter. Note that in the “old days”, Green’s functions used to be called influence functions, a name that works particularly well in this case. We have other Green’s functions for the d’Alembertian. The one above is called the retarded Green’s function, and we also have an advanced Green’s function. Writing + for advanced and - for retarded, these are

\begin{aligned}G_{\pm} = \frac{\delta\left(t \pm \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right)}{4 \pi {\left\lvert{\mathbf{x}}\right\rvert}}\end{aligned} \hspace{\stretch{1}}(3.27)

There are also causal and non-causal variations that won’t be of interest for this course.

This arms us now to solve any problem in the Lorentz gauge

\begin{aligned}A^k(\mathbf{x}, t) = \frac{1}{{c}} \int d^3 \mathbf{x}' dt' \frac{\delta\left(t - t' - \frac{{\left\lvert{\mathbf{x} -\mathbf{x}'}\right\rvert}}{c}\right)}{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}}j^k(\mathbf{x}', t')+\text{An arbitrary collection of EM waves.}\end{aligned} \hspace{\stretch{1}}(3.28)

The additional EM waves are the possible contributions from the homogeneous equation.

Since \delta(t - t' - {\left\lvert{\mathbf{x} -\mathbf{x}'}\right\rvert}/c) is non-zero only when t' = t - {\left\lvert{\mathbf{x} -\mathbf{x}'}\right\rvert}/c, the non-homogeneous parts of 3.28 reduce to

\begin{aligned}A^k(\mathbf{x}, t) = \frac{1}{{c}} \int d^3 \mathbf{x}' \frac{j^k(\mathbf{x}', t - {\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}/c)}{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}}.\end{aligned} \hspace{\stretch{1}}(3.29)

Our potentials at time t and spatial position \mathbf{x} are completely specified in terms of the sums of the currents acting at the retarded time t - {\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}/c. The field can only depend on the charge and current distribution in the past. Specifically, it can only depend on the charge and current distribution on the past light cone of the spacetime point at which we measure the field.

Example of the Green’s function. Consider a charged particle moving on a worldline

\begin{aligned}(c t, \mathbf{x}_c(t))\end{aligned} \hspace{\stretch{1}}(4.30)

(c for classical)

For this particle

\begin{aligned}\rho(\mathbf{x}, t) &= e \delta^3(\mathbf{x} - \mathbf{x}_c(t)) \\ \mathbf{j}(\mathbf{x}, t) &= e \dot{\mathbf{x}}_c(t) \delta^3(\mathbf{x} - \mathbf{x}_c(t))\end{aligned} \hspace{\stretch{1}}(4.31)

\begin{aligned}\begin{bmatrix}A^0(\mathbf{x}, t) \\ \mathbf{A}(\mathbf{x}, t)\end{bmatrix}&=\frac{1}{{c}}\int d^3 \mathbf{x}' dt'\frac{ \delta( t - t' - {\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}/c )}{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}}\begin{bmatrix}c e \\ e \dot{\mathbf{x}}_c(t')\end{bmatrix}\delta^3(\mathbf{x}' - \mathbf{x}_c(t')) \\ &=\int_{-\infty}^\infty dt'\frac{ \delta( t - t' - {\left\lvert{\mathbf{x} - \mathbf{x}_c(t')}\right\rvert}/c )}{{\left\lvert{\mathbf{x} - \mathbf{x}_c(t')}\right\rvert}}\begin{bmatrix}e \\ e \dot{\mathbf{x}}_c(t')/c\end{bmatrix}\end{aligned}

PICTURE: light cones, and curved worldline. Pick an arbitrary point (\mathbf{x}_0, t_0), and draw the past light cone, looking at where this intersects with the trajectory

For the arbitrary point (\mathbf{x}_0, t_0) we see that this point and the retarded time (\mathbf{x}_c(t_r), t_r) obey the relation

\begin{aligned}c (t_0 - t_r) = {\left\lvert{\mathbf{x}_0 - \mathbf{x}_c(t_r)}\right\rvert}\end{aligned} \hspace{\stretch{1}}(4.33)

This retarded time is unique. There is only one such intersection.

Our job is to calculate

\begin{aligned}\int_{-\infty}^\infty \delta(f(x)) g(x) dx = \frac{g(x_{*})}{{\left\lvert{f'(x_{*})}\right\rvert}}\end{aligned} \hspace{\stretch{1}}(4.34)

where f(x_{*}) = 0.
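
Before doing that, here's a quick numerical sanity check of this composition rule (an aside of my own), using a narrow normalized Gaussian as a stand-in for the delta function, and a hand-picked f with a single root:

import numpy as np

def nascent_delta(u, eps=1e-3):
    # narrow normalized Gaussian standing in for the delta function
    return np.exp(-u**2/(2*eps**2))/(eps*np.sqrt(2*np.pi))

f = lambda x: np.tanh(x) - 0.3   # single root at x* = arctanh(0.3)
g = lambda x: np.cos(x) + 2

x = np.linspace(-5, 5, 2000001)
dx = x[1] - x[0]
lhs = np.sum(nascent_delta(f(x))*g(x))*dx

xstar = np.arctanh(0.3)
rhs = g(xstar)/abs(1 - np.tanh(xstar)**2)  # g(x*)/|f'(x*)|
print(lhs, rhs)  # agree to several decimal places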

\begin{aligned}f(t') = t - t' - {\left\lvert{\mathbf{x} - \mathbf{x}_c(t')}\right\rvert}/c\end{aligned} \hspace{\stretch{1}}(4.35)

\begin{aligned}\frac{\partial {f}}{\partial {t'}}&= -1 - \frac{1}{{c}} \frac{\partial {}}{\partial {t'}} \sqrt{ (\mathbf{x} - \mathbf{x}_c(t')) \cdot (\mathbf{x} - \mathbf{x}_c(t')) } \\ &= -1 + \frac{1}{{c}} \frac{(\mathbf{x} - \mathbf{x}_c(t')) \cdot \mathbf{v}_c(t')}{{\left\lvert{\mathbf{x} - \mathbf{x}_c(t')}\right\rvert}}\end{aligned}

References

[1] L.D. Landau and E.M. Lifshitz. The classical theory of fields. Butterworth-Heinemann, 1980.

[2] M. Schwartz. Principles of Electrodynamics. Dover Publications, 1987.



PHY450H1S. Relativistic Electrodynamics Lecture 17 (Taught by Prof. Erich Poppitz). Energy and momentum density. Starting a Green’s function solution to Maxwell’s equation.

Posted by peeterjoot on March 8, 2011

[Click here for a PDF of this post with nicer formatting]

Reading.

Covering chapter 6 material \S 31, and starting chapter 8 material from the text [1].

Covering lecture notes pp. 128-135: energy flux and momentum density of the EM wave (128-129); radiation pressure, its discovery and significance in physics (130-131); EM fields of moving charges: setting up the wave equation with a source (132-133); the convenience of Lorentz gauge in the study of radiation (134); reminder on Green’s functions from electrostatics (135) [Tuesday, Mar. 8]

Review. Energy density and Poynting vector.

Last time we showed that Maxwell’s equations imply

\begin{aligned}\frac{\partial }{\partial t} \frac{\mathbf{E}^2 + \mathbf{B}^2 }{8 \pi} = -\mathbf{j} \cdot \mathbf{E} - \boldsymbol{\nabla} \cdot \mathbf{S}\end{aligned} \hspace{\stretch{1}}(2.1)

In the lecture, Professor Poppitz said he was free here to use a full time derivative. When asked why, it was because he was considering \mathbf{E} and \mathbf{B} here to be functions of time only, since they were measured at a fixed point in space. This is really the same thing as using a time partial, so in these notes I’ll just be explicit and stick to using partials.

\begin{aligned}\mathbf{S} = \frac{c}{4 \pi} \mathbf{E} \times \mathbf{B}\end{aligned} \hspace{\stretch{1}}(2.2)

\begin{aligned}\frac{\partial }{\partial {t}} \int_V \frac{\mathbf{E}^2 + \mathbf{B}^2 }{8 \pi} = - \int_V \mathbf{j} \cdot \mathbf{E} - \int_{\partial V} d^2 \boldsymbol{\sigma} \cdot \mathbf{S}\end{aligned} \hspace{\stretch{1}}(2.3)

Any change in the energy must be due either to currents or to energy escaping through the surface.

\begin{aligned}\mathcal{E} = \frac{\mathbf{E}^2 + \mathbf{B}^2 }{8 \pi} &= \mbox{Energy density of the EM field} \\ \mathbf{S} = \frac{c}{4 \pi} \mathbf{E} \times \mathbf{B} &= \mbox{Energy flux of the EM fields}\end{aligned} \hspace{\stretch{1}}(2.4)

The energy flux of the EM field: this is the energy flowing through d^2 \mathbf{A} in unit time (\mathbf{S} \cdot d^2 \mathbf{A}).

How about electromagnetic waves?

In a plane wave moving in direction \mathbf{k}.

PICTURE: \mathbf{E} \parallel \hat{\mathbf{z}}, \mathbf{B} \parallel \hat{\mathbf{x}}, \mathbf{k} \parallel \hat{\mathbf{y}}.

So, \mathbf{S} \parallel \mathbf{k} since \mathbf{E} \times \mathbf{B} \propto \mathbf{k}.

{\left\lvert{\mathbf{S}}\right\rvert} for a plane wave is the amount of energy through unit area perpendicular to \mathbf{k} in unit time.

Recall that we calculated

\begin{aligned}\mathbf{B} &= (\mathbf{k} \times \boldsymbol{\beta}) \sin(\omega t - \mathbf{k} \cdot \mathbf{x}) \\ \mathbf{E} &= \boldsymbol{\beta} {\left\lvert{\mathbf{k}}\right\rvert} \sin(\omega t - \mathbf{k} \cdot \mathbf{x})\end{aligned} \hspace{\stretch{1}}(3.6)

Since we had \mathbf{k} \cdot \boldsymbol{\beta} = 0, we have {\left\lvert{\mathbf{E}}\right\rvert} = {\left\lvert{\mathbf{B}}\right\rvert}, and our Poynting vector follows nicely

\begin{aligned}\mathbf{S} &= \frac{\mathbf{k}}{{\left\lvert{\mathbf{k}}\right\rvert}} \frac{c}{4 \pi} \mathbf{E}^2  \\ &= \frac{\mathbf{k}}{{\left\lvert{\mathbf{k}}\right\rvert}} c \frac{\mathbf{E}^2 + \mathbf{B}^2}{8 \pi} \\ &= \frac{\mathbf{k}}{{\left\lvert{\mathbf{k}}\right\rvert}} c \mathcal{E}\end{aligned}
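
These plane wave relations are easy to spot check numerically. A little numpy verification of my own, in c = 1 units, with arbitrary \mathbf{k} and \boldsymbol{\beta} satisfying \mathbf{k} \cdot \boldsymbol{\beta} = 0:

import numpy as np

k = np.array([0.0, 2.0, 0.0])        # propagation direction
beta = np.array([0.0, 0.0, 0.5])     # polarization, k . beta = 0
phase = np.sin(0.8)                  # sin(omega t - k . x) at some event

E = beta*np.linalg.norm(k)*phase     # fields from 3.6
B = np.cross(k, beta)*phase

S = np.cross(E, B)/(4*np.pi)         # Poynting vector, c = 1
energy_density = (E @ E + B @ B)/(8*np.pi)

print(np.linalg.norm(E), np.linalg.norm(B))  # equal magnitudes
print(np.cross(S, k))                        # zero vector: S is parallel to k
print(np.linalg.norm(S), energy_density)     # |S| = c x (energy density)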

\begin{aligned}[\mathbf{S}] = \frac{\text{energy}}{\text{time} \times \text{area}} = \frac{\text{momentum} \times \text{speed}}{\text{time} \times \text{area}}\end{aligned} \hspace{\stretch{1}}(3.8)

\begin{aligned}\left[\frac{\mathbf{S}}{c^2} \right] &= \frac{\text{momentum}}{\text{time} \times \text{area} \times \text{speed}} \\ &= \frac{\text{momentum}}{\text{area} \times \text{distance}} \\ &= \frac{\text{momentum}}{\text{volume}}\end{aligned}

So we see that \mathbf{S}/c^2 is indeed rightly called “the momentum density” of the EM field.

We will later find that \mathcal{E} and \mathbf{S} are components of a rank-2 four tensor

\begin{aligned}T^{ij} = \begin{bmatrix}\mathcal{E} & \frac{S^1}{c^2} & \frac{S^2}{c^2} & \frac{S^3}{c^2} \\ \frac{S^1}{c^2} & & & \\ \frac{S^2}{c^2} & & \begin{bmatrix}\sigma^{\alpha\beta} \end{bmatrix}& \\ \frac{S^3}{c^2} & & & \end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.9)

where \sigma^{\alpha\beta} is the stress tensor. We will get to all this in more detail later.

For an EM wave we have

\begin{aligned}\mathbf{S} = \hat{\mathbf{k}} c \mathcal{E}\end{aligned} \hspace{\stretch{1}}(3.10)

(this is the energy flux)

\begin{aligned}\frac{\mathbf{S}}{c^2} = \hat{\mathbf{k}} \frac{\mathcal{E}}{c}\end{aligned} \hspace{\stretch{1}}(3.11)

(the momentum density of the wave).

\begin{aligned}c {\left\lvert{\frac{\mathbf{S}}{c^2}}\right\rvert} = \mathcal{E}\end{aligned} \hspace{\stretch{1}}(3.12)

(recall \mathcal{E} = c {\left\lvert{\mathbf{p}}\right\rvert} for massless particles).

EM waves carry energy and momentum, so when they are absorbed or reflected, these are transferred to bodies.

Kepler speculated about this effect, having observed that the tails of comets face away from the sun, as if pushed by the sunlight.

Maxwell also suggested that light would exert a force (presumably he wrote down the “Maxwell stress tensor” T^{ij} that is named after him).

This was actually measured later in 1901, by Peter Lebedev (Russia).

PICTURE: pole with flags in vacuum jar. Black (absorber) on one side, and Silver (reflector) on the other. Between the two of these, momentum conservation will introduce rotation (in the direction of the silver).

This is actually a tricky experiment and requires the vacuum, since the black surface warms up, and heats up the nearby gas molecules, which causes a rotation in the opposite direction due to just these thermal effects.

Another factor that prevents star collapse under gravitation is the radiation pressure of the light.

Moving on. Solving Maxwell’s equation

Our equations are

\begin{aligned}\epsilon^{i j k l} \partial_j F_{k l} &= 0 \\ \partial_i F^{i k} &= \frac{4 \pi}{c} j^k,\end{aligned} \hspace{\stretch{1}}(4.13)

where we assume that j^k(\mathbf{x}, t) is a given. Our task is to find F^{i k}, the (\mathbf{E}, \mathbf{B}) fields.

Proceed by finding A^i. First, as usual, with F_{i j} = \partial_i A_j - \partial_j A_i the Bianchi identity (the first of 4.13) is automatically satisfied, so we focus on the current equation.

In terms of potentials

\begin{aligned}\partial_i (\partial^i A^k - \partial^k A^i) = \frac{ 4 \pi}{c} j^k\end{aligned} \hspace{\stretch{1}}(4.15)

or

\begin{aligned}\partial_i \partial^i A^k - \partial^k (\partial_i A^i) = \frac{ 4 \pi}{c} j^k\end{aligned} \hspace{\stretch{1}}(4.16)

We want to work in the Lorentz gauge \partial_i A^i = 0. This is justified by the simplicity of the remaining problem

\begin{aligned}\partial_i \partial^i A^k = \frac{4 \pi}{c} j^k\end{aligned} \hspace{\stretch{1}}(4.17)

Write

\begin{aligned}\partial_i \partial^i = \frac{1}{c^2} \frac{\partial^2 }{\partial t^2} - \Delta = \square\end{aligned} \hspace{\stretch{1}}(4.18)

where

\begin{aligned}\Delta = \frac{\partial^2 }{\partial x^2} + \frac{\partial^2 }{\partial y^2} + \frac{\partial^2 }{\partial z^2}\end{aligned} \hspace{\stretch{1}}(4.19)

This \square is the d’Alembert operator (“d’Alembertian”).

Our equation is

\begin{aligned}\square A^k = \frac{4 \pi}{c} j^k\end{aligned} \hspace{\stretch{1}}(4.20)

(in the Lorentz gauge)

If we learn how to solve 4.20, then we've learned it all.

Method: Green's functions

In electrostatics, where \mathbf{j} = 0 and only A^0 is non-zero, we have

\begin{aligned}\Delta A^0 = -4 \pi \rho\end{aligned} \hspace{\stretch{1}}(4.21)

Solution

\begin{aligned}\Delta_{\mathbf{x}} G(\mathbf{x} - \mathbf{x}') = \delta^3( \mathbf{x} - \mathbf{x}')\end{aligned} \hspace{\stretch{1}}(4.22)

PICTURE:

\begin{aligned}\rho(\mathbf{x}') d^3 \mathbf{x}'\end{aligned} \hspace{\stretch{1}}(4.23)

(a small box)

acting through distance {\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}, acting at point \mathbf{x}. With G(\mathbf{x}, \mathbf{x}') = -1/4 \pi{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}, we have

\begin{aligned}\int d^3 \mathbf{x}' \Delta_{\mathbf{x}} G(\mathbf{x} - \mathbf{x}') \rho(\mathbf{x}') &= \int d^3 \mathbf{x}' \delta^3( \mathbf{x} - \mathbf{x}') \rho(\mathbf{x}') \\ &= \rho(\mathbf{x})\end{aligned}

Also, since \Delta_{\mathbf{x}} can be pulled into the integral, where it acts only on the \mathbf{x} dependence of G, the function

\begin{aligned}A^0(\mathbf{x})&=-4 \pi \int d^3 \mathbf{x}' G(\mathbf{x} - \mathbf{x}') \rho(\mathbf{x}') \\ &=\int d^3 \mathbf{x}' \frac{\rho(\mathbf{x}')}{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}}\end{aligned}

satisfies 4.21 by construction. We end up finding that

\begin{aligned}\phi(\mathbf{x}) = \int \frac{\rho(\mathbf{x}')}{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}} d^3 \mathbf{x}',\end{aligned} \hspace{\stretch{1}}(4.24)

thus solving the problem. We wish next to do this for the Maxwell equation 4.20.
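
As a cross check of my own (not from the lecture), this Green's function can be validated symbolically for a smooth source. For a unit Gaussian charge density the convolution 4.24 has the standard erf closed form, and sympy confirms that it satisfies \Delta \phi = -4 \pi \rho:

import sympy as sp

r, sigma = sp.symbols('r sigma', positive=True)

# unit-charge Gaussian ball
rho = sp.exp(-r**2/(2*sigma**2))/((2*sp.pi)**sp.Rational(3, 2)*sigma**3)

# closed form of the potential integral 4.24 for this density
phi = sp.erf(r/(sp.sqrt(2)*sigma))/r

# radial Laplacian: (1/r) d^2/dr^2 (r phi)
lap = sp.diff(r*phi, r, 2)/r
print(sp.simplify(lap + 4*sp.pi*rho))  # 0, i.e. Laplacian of phi = -4 pi rho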

The Green's function method is effective, but I can't help but consider it somewhat of a cheat, since one has to somehow already know what the Green's function is. In the electrostatics case, at least, we can work from the potential function and take its Laplacian to find that this is equivalent (thus implicitly solving for the Green's function at the same time). It will be interesting to see how we do this for the forced d'Alembertian equation.

References

[1] L.D. Landau and E.M. Lifshitz. The classical theory of fields. Butterworth-Heinemann, 1980.


PHY356F: Quantum Mechanics I. Lecture 10 notes. Hydrogen atom.

Posted by peeterjoot on November 23, 2010

[Click here for a PDF of this post with nicer formatting]

Introduce the center of mass coordinates.

We’ll want to solve this using the formalism we’ve discussed. The general problem is a proton, positively charged, with a nearby negative charge (the electron).

Our equation to solve is

\begin{aligned}\left(-\frac{\hbar^2}{2 m_1} \boldsymbol{\nabla}_1^2-\frac{\hbar^2}{2 m_2} \boldsymbol{\nabla}_2^2\right)\bar{u}(\mathbf{r}_1, \mathbf{r}_2) + V(\mathbf{r}_1, \mathbf{r}_2)\bar{u}(\mathbf{r}_1, \mathbf{r}_2)=E \bar{u}(\mathbf{r}_1, \mathbf{r}_2).\end{aligned} \hspace{\stretch{1}}(6.123)

Here \left( -\frac{\hbar^2}{2 m_1} \boldsymbol{\nabla}_1^2 -\frac{\hbar^2}{2 m_2} \boldsymbol{\nabla}_2^2 \right) is the total kinetic energy term.
For hydrogen we can consider the potential to be the Coulomb potential energy function that depends only on \mathbf{r}_1 - \mathbf{r}_2. We can transform this using a center of mass transformation. Introduce the centre of mass coordinate and relative coordinate vectors

\begin{aligned}\mathbf{R} &= \frac{m_1 \mathbf{r}_1 + m_2 \mathbf{r}_2}{ m_1 + m_2 } \\ \mathbf{r} &= \mathbf{r}_1 - \mathbf{r}_2.\end{aligned} \hspace{\stretch{1}}(6.124)

The notation \boldsymbol{\nabla}_k^2 represents the Laplacian for the positions of the k’th particle, so that if \mathbf{r}_1 = (x_1, y_1, z_1) is the position of the first particle, the Laplacian for this is:

\begin{aligned}\boldsymbol{\nabla}_1^2=\frac{\partial^2}{\partial x_1^2}+\frac{\partial^2}{\partial y_1^2}+\frac{\partial^2}{\partial z_1^2}\end{aligned} \hspace{\stretch{1}}(6.126)

Here \mathbf{R} is the center of mass coordinate, and \mathbf{r} is the relative coordinate. With this transformation we can reduce the problem to a single coordinate PDE.

We set \bar{u}(\mathbf{r}_1, \mathbf{r}_2) = u(\mathbf{r}) U(\mathbf{R}) and E = E_{rel} + E_{cm}, and get

\begin{aligned}-\frac{\hbar^2}{2\mu} {\boldsymbol{\nabla}_{\mathbf{r}}}^2 u(\mathbf{r}) + V(\mathbf{r}) u(\mathbf{r}) = E_{rel} u(\mathbf{r})\end{aligned} \hspace{\stretch{1}}(6.127)

and

\begin{aligned}-\frac{\hbar^2}{2M} {\boldsymbol{\nabla}_{\mathbf{R}}}^2 U(\mathbf{R}) = E_{cm} U(\mathbf{R})\end{aligned} \hspace{\stretch{1}}(6.128)

where M = m_1 + m_2 is the total mass, and \mu = m_1 m_2/M is the reduced mass.
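
This kinetic energy split is easy to verify mechanically. Here's a one dimensional sympy check of my own (the Gaussian trial function below is arbitrary; any smooth function of (X, x) would do):

import sympy as sp

m1, m2, a, b = sp.symbols('m1 m2 a b', positive=True)
x1, x2 = sp.symbols('x1 x2', real=True)

M = m1 + m2
mu = m1*m2/M
X = (m1*x1 + m2*x2)/M   # center of mass coordinate
x = x1 - x2             # relative coordinate

psi = sp.exp(-a*X**2 - b*x**2)  # arbitrary smooth trial function

# kinetic term in the original coordinates
lhs = sp.diff(psi, x1, 2)/m1 + sp.diff(psi, x2, 2)/m2

# the same operator in the center of mass coordinates
Xs, xs = sp.symbols('Xs xs', real=True)
phi = sp.exp(-a*Xs**2 - b*xs**2)
rhs = (sp.diff(phi, Xs, 2)/M + sp.diff(phi, xs, 2)/mu).subs({Xs: X, xs: x})

print(sp.simplify(lhs - rhs))  # 0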

Aside: WHY do we care (slide of Hydrogen line spectrum shown)? This all started because when people looked at the spectrum of the hydrogen atom, a continuous spectrum was not found. Instead what was found was quantized frequencies. All this abstract Hilbert space notation with its bras and kets is a way of representing observable phenomena.

Also note that we have the same sort of problems in electrodynamics and mechanics, so we are able to recycle this sort of work, either applying it in those problems later, or using those techniques here.

In Electromagnetism these are the problems involving the solution to

\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{E} = 0\end{aligned} \hspace{\stretch{1}}(6.129)

or for

\begin{aligned}\mathbf{E} = - \boldsymbol{\nabla} \Phi\end{aligned} \hspace{\stretch{1}}(6.130)

\begin{aligned}\boldsymbol{\nabla}^2 \Phi = 0,\end{aligned} \hspace{\stretch{1}}(6.131)

where \mathbf{E} is the electric field and \Phi is the electric potential.

We need to solve 6.127 for u(\mathbf{r}). In spherical coordinates

\begin{aligned}-\frac{\hbar^2}{2 \mu} \frac{1}{{r}} \frac{d^2}{dr^2} ( r R_l ) + \left( V(\mathbf{r}) + \frac{\hbar^2 l(l+1)}{2 \mu r^2} \right) R_l = E R_l\end{aligned} \hspace{\stretch{1}}(6.132)

where

\begin{aligned}u(\mathbf{r}) = R_l(r) Y_{lm}(\theta, \phi)\end{aligned} \hspace{\stretch{1}}(6.133)

This all follows by the separation of variables technique that we’ll use here, in E and M, in PDEs, and so forth.

FIXME: picture drawn. Theta measured down from \mathbf{e}_3 axis to the position \mathbf{r} and \phi measured in the x,y plane measured in the \mathbf{e}_1 to \mathbf{e}_2 orientation.

For the hydrogen atom, we have

\begin{aligned}V(\mathbf{r}) = - \frac{Z e^2}{r}\end{aligned} \hspace{\stretch{1}}(6.134)


We introduce

\begin{aligned}\rho &= \alpha r \\ \alpha &= \sqrt{\frac{-8 \mu E}{\hbar^2}} \\ \lambda &= \frac{2 \mu Z e^2}{\hbar^2 \alpha} \\ \frac{2 \mu (- E) }{\hbar^2 \alpha^2 } &= \frac{1}{{4}}\end{aligned} \hspace{\stretch{1}}(6.135)

and write

\begin{aligned}\frac{d^2 R_l}{d\rho^2} + \frac{2}{\rho} \frac{d R_l}{d\rho} + \left( \frac{\lambda}{\rho} - \frac{l(l+1)}{\rho^2} - \frac{1}{{4}} \right) R_l = 0\end{aligned} \hspace{\stretch{1}}(6.139)

Large \rho limit.

For \rho \rightarrow \infty, 6.139 becomes

\begin{aligned}\frac{d^2 R_l}{d\rho^2} - \frac{1}{{4}} R_l = 0\end{aligned} \hspace{\stretch{1}}(6.140)

which implies solutions of the form

\begin{aligned}R_l(\rho) = e^{\pm \rho/2}\end{aligned} \hspace{\stretch{1}}(6.141)

but keep R_l(\rho) = e^{-\rho/2} and note that R_l(\rho) = F(\rho)e^{-\rho/2} is also a solution in the limit of \rho \rightarrow \infty, where F(\rho) is a polynomial.

Let F(\rho) = \rho^s L(\rho) where L(\rho) = a_0 + a_1 \rho + \cdots a_\nu \rho^\nu + \cdots.

Small \rho limit.

We also want to consider the small \rho limit, and piece together the information that we find. Think about the following. The small \rho \rightarrow 0 or r \rightarrow 0 limit gives

\begin{aligned}\frac{d^2 R_l}{d\rho^2} - \frac{l(l+1)}{\rho^2} R_l = 0\end{aligned} \hspace{\stretch{1}}(6.142)

\paragraph{Question:} Is this correct?

Not always. We will also think about the l=0 case later (where \lambda/\rho would probably need to be retained).

We need:

\begin{aligned}\frac{d^2 R_l}{d\rho^2} + \frac{2}{\rho} \frac{d R_l}{d\rho} - \frac{l(l+1)}{\rho^2} R_l = 0\end{aligned} \hspace{\stretch{1}}(6.143)

Instead of using 6.142 as in the text, we must substitute R_l = \rho^s into the above to find

\begin{aligned}s(s-1) \rho^{s-2} + 2 s \rho^{s-2} - l(l+1) \rho^{s-2} &= 0 \\ \left( s(s-1) + 2 s - l(l+1) \right) \rho^{s-2} &= 0\end{aligned} \hspace{\stretch{1}}(6.144)

For this equality to hold for all \rho we need

\begin{aligned}s(s-1) + 2 s - l(l+1) = 0\end{aligned} \hspace{\stretch{1}}(6.146)

This has solutions s = l and s = -(l+1), and we need the non-negative choice s = l for normalizability, which implies

\begin{aligned}R_l(\rho) = \rho^l L(\rho) e^{-\rho/2}.\end{aligned} \hspace{\stretch{1}}(6.147)

Now we need to find what restrictions we must have on L(\rho). Recall that we have L(\rho) = \sum a_\nu \rho^\nu. Substitution into 6.139 gives

\begin{aligned}\rho \frac{d^2 L}{d\rho^2} + \left( 2(l+1) - \rho \right) \frac{d L}{d \rho} + (\lambda - l - 1) L = 0\end{aligned} \hspace{\stretch{1}}(6.148)

We get

\begin{aligned}A_0 + A_1 \rho + \cdots A_\nu \rho^\nu + \cdots = 0\end{aligned} \hspace{\stretch{1}}(6.149)

For this to be valid for all \rho,

\begin{aligned}a_{\nu+1} \left( (\nu+1)(\nu+ 2l + 2)\right)-a_{\nu} \left( \nu - \lambda + l + 1\right)=0\end{aligned} \hspace{\stretch{1}}(6.150)

or

\begin{aligned}\frac{a_{\nu+1}}{ a_{\nu} } =\frac{ \nu - \lambda + l + 1 }{ (\nu+1)(\nu+ 2l + 2) }\end{aligned} \hspace{\stretch{1}}(6.151)

For large \nu we have

\begin{aligned}\frac{a_{\nu+1}}{ a_{\nu} } =\frac{1}{{\nu+1}}\rightarrow \frac{1}{{\nu}}\end{aligned} \hspace{\stretch{1}}(6.152)

Recall that for the exponential Taylor series we have

\begin{aligned}e^\rho = 1 + \rho + \frac{\rho^2}{2!} + \cdots\end{aligned} \hspace{\stretch{1}}(6.153)

for which we have

\begin{aligned}\frac{a_{\nu+1}}{a_\nu} \rightarrow \frac{1}{{\nu}}\end{aligned} \hspace{\stretch{1}}(6.154)

For large \nu, L(\rho) is behaving like e^\rho, and if that were the case we would have

\begin{aligned}R_l(\rho) = \rho^l L(\rho) e^{-\rho/2} \rightarrow \rho^l e^\rho e^{-\rho/2} = \rho^l e^{\rho/2}\end{aligned} \hspace{\stretch{1}}(6.155)

This is divergent, so for normalizable solutions we require L(\rho) to be a polynomial with a finite number of terms.

The polynomial L(\rho) must stop at \nu = n', and we must have

\begin{aligned}a_{\nu+1} = a_{n' +1} = 0\end{aligned} \hspace{\stretch{1}}(6.156)

\begin{aligned}a_{n'} \ne 0\end{aligned} \hspace{\stretch{1}}(6.157)

From 6.150 we have

\begin{aligned}a_{n'} \left( n' - \lambda + l + 1\right)=0\end{aligned} \hspace{\stretch{1}}(6.158)

so we require

\begin{aligned}n' = \lambda - l - 1\end{aligned} \hspace{\stretch{1}}(6.159)

Let \lambda = n, an integer, with n' = 0, 1, 2, \cdots. Then n' + l + 1 = n says that for n = 1, 2, \cdots

\begin{aligned}l \le n-1\end{aligned} \hspace{\stretch{1}}(6.160)

If

\begin{aligned}\lambda = n = \frac{2 \mu Z e^2 }{\hbar^2 \alpha}\end{aligned} \hspace{\stretch{1}}(6.161)

we have

\begin{aligned}E = E_n = - \frac{Z^2 e^2 }{2 a_0} \frac{1}{{n^2}}\end{aligned} \hspace{\stretch{1}}(6.162)

where a_0 = \hbar^2/\mu e^2 is the Bohr radius, and \alpha = \sqrt{-8 \mu E/\hbar^2}. In the lecture m was originally used for the reduced mass. I’ve switched to \mu earlier so that this cannot be mixed up with this use of m for the azimuthal quantum number associated with L_z Y_{lm} = m \hbar Y_{lm}.

PICTURE ON BOARD. Energy level transitions on 1/n^2 graph with differences between n=2 to n=1 shown, and photon emitted as a result of the n=2 to n=1 transition.
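
For a sense of scale, here's a quick tabulation of my own (plain numbers, not lecture content) of the Z = 1 energies and the n = 2 \rightarrow 1 photon, using the familiar E_n = -13.6 \, \text{eV}/n^2 values:

RY = 13.6057  # Rydberg energy in eV
for n in range(1, 5):
    print(n, -RY/n**2)            # -13.6, -3.40, -1.51, -0.85 eV

E_photon = -RY/2**2 - (-RY/1**2)  # the n = 2 -> n = 1 transition
print(E_photon)                   # ~10.2 eV
print(1239.84/E_photon)           # ~121.5 nm, Lyman alpha (hc = 1239.84 eV nm)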

From Chapter 4 and the story of the spherical harmonics, for a given l, the quantum number m varies between -l and l in integer steps. The radial part of the solution of this separation of variables problem becomes

\begin{aligned}R_l = \rho^l L(\rho) e^{-\rho/2}\end{aligned} \hspace{\stretch{1}}(6.163)

where the functions L(\rho) are (associated) Laguerre polynomials, and our complete wavefunction is

\begin{aligned}u_{nlm}(r, \theta, \phi) = R_l(\rho) Y_{lm}(\theta, \phi)\end{aligned} \hspace{\stretch{1}}(6.164)

\begin{aligned}n &= 1, 2, \cdots \\ l &= 0, 1, 2, \cdots, n-1 \\ m &= -l, -l+1, \cdots 0, 1, 2, \cdots, l-1, l\end{aligned} \hspace{\stretch{1}}(6.165)
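
As a cross check of my own (not lecture content), the series built from the recurrence 6.151 with \lambda = n can be compared against sympy's associated Laguerre polynomials; they agree up to overall normalization:

import sympy as sp

rho = sp.symbols('rho', positive=True)

def L_from_recurrence(n, l):
    # build L(rho) from the coefficient ratio 6.151, taking a_0 = 1
    a, poly = sp.Integer(1), sp.Integer(1)
    for nu in range(n - l - 1):
        a *= sp.Rational(nu - n + l + 1, (nu + 1)*(nu + 2*l + 2))
        poly += a*rho**(nu + 1)
    return sp.expand(poly)

n, l = 4, 1
mine = L_from_recurrence(n, l)
ref = sp.expand(sp.assoc_laguerre(n - l - 1, 2*l + 1, rho))
print(sp.simplify(mine/ref))  # a pure number: same polynomial up to scale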

Note that for n=1, l=0, R_{10} \propto e^{-r/a_0}.


Notes and problems for Desai chapter IV.

Posted by peeterjoot on October 12, 2010

[Click here for a PDF of this post with nicer formatting]

Notes.

Chapter IV notes and problems for [1].

There’s a lot of magic related to the spherical Harmonics in this chapter, with identities pulled out of the Author’s butt. It would be nice to work through that, but need a better reference to work from (or skip ahead to chapter 26 where some of this is apparently derived).

Other stuff pending background derivation and verification includes

\begin{itemize}
\item Antisymmetric tensor summation identity (a brute force numerical check of this identity appears just after this list).

\begin{aligned}\sum_i \epsilon_{ijk} \epsilon_{iab} = \delta_{ja} \delta_{kb} - \delta_{jb}\delta_{ka}\end{aligned} \hspace{\stretch{1}}(1.1)

This is obviously the coordinate equivalent of the dot product of two bivectors

\begin{aligned}(\mathbf{e}_j \wedge \mathbf{e}_k) \cdot (\mathbf{e}_a \wedge \mathbf{e}_b) &=( (\mathbf{e}_j \wedge \mathbf{e}_k) \cdot \mathbf{e}_a ) \cdot \mathbf{e}_b =\delta_{ka}\delta_{jb} - \delta_{ja}\delta_{kb}\end{aligned} \hspace{\stretch{1}}(1.2)

We can prove 1.1 by expanding the LHS of 1.2 in coordinates

\begin{aligned}(\mathbf{e}_j \wedge \mathbf{e}_k) \cdot (\mathbf{e}_a \wedge \mathbf{e}_b)&= \sum_{ie} \left\langle{{\epsilon_{ijk} \mathbf{e}_j \mathbf{e}_k \epsilon_{eab} \mathbf{e}_a \mathbf{e}_b}}\right\rangle \\ &=\sum_{ie}\epsilon_{ijk} \epsilon_{eab}\left\langle{{(\mathbf{e}_i \mathbf{e}_i) \mathbf{e}_j \mathbf{e}_k (\mathbf{e}_e \mathbf{e}_e) \mathbf{e}_a \mathbf{e}_b}}\right\rangle \\ &=\sum_{ie}\epsilon_{ijk} \epsilon_{eab}\left\langle{{\mathbf{e}_i \mathbf{e}_e I^2}}\right\rangle \\ &=-\sum_{ie} \epsilon_{ijk} \epsilon_{eab} \delta_{ie} \\ &=-\sum_i\epsilon_{ijk} \epsilon_{iab}\qquad\square\end{aligned}

\item Question on raising and lowering arguments.

How equation (4.240) was arrived at is not clear. In (4.239) he writes

\begin{aligned}\int_0^{2\pi} \int_0^{\pi} d\theta d\phi(L_{-} Y_{lm})^\dagger L_{-} Y_{lm} \sin\theta\end{aligned}

Shouldn’t that Hermitian conjugation be just complex conjugation? If so, one would have

\begin{aligned}\int_0^{2\pi} \int_0^{\pi} d\theta d\phi L_{-}^{*} Y_{lm}^{*}L_{-} Y_{lm} \sin\theta\end{aligned}

How does he end up with the L_{-} and the Y_{lm}^{*} interchanged? What justifies this commutation?

A much clearer discussion of this can be found in The operators L_{\pm}, where Dirac notation is used for the normalization discussion.

\item Another question on raising and lowering arguments.

The reasoning leading to (4.238) isn’t clear to me. I fail to see how the L_{-} commutation with \mathbf{L}^2 implies this.

\end{itemize}
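
As promised in the first item, here's a brute force check of 1.1 over all index values (a sympy aside of my own):

from itertools import product
from sympy import LeviCivita, KroneckerDelta

ok = all(
    sum(LeviCivita(i, j, k)*LeviCivita(i, a, b) for i in range(3))
    == KroneckerDelta(j, a)*KroneckerDelta(k, b)
     - KroneckerDelta(j, b)*KroneckerDelta(k, a)
    for j, k, a, b in product(range(3), repeat=4)
)
print(ok)  # True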

Problems

Problem 1.

Statement.

Write down the free particle Schrödinger equation for two dimensions in (i) Cartesian and (ii) polar coordinates. Obtain the corresponding wavefunction.

Cartesian case.

For the Cartesian coordinates case we have

\begin{aligned}H = -\frac{\hbar^2}{2m} (\partial_{xx} + \partial_{yy}) = i \hbar \partial_t\end{aligned} \hspace{\stretch{1}}(2.3)

Application of separation of variables with \Psi = XYT gives

\begin{aligned}-\frac{\hbar^2}{2m} \left( \frac{X''}{X} +\frac{Y''}{Y} \right) = i \hbar \frac{T'}{T} = E .\end{aligned} \hspace{\stretch{1}}(2.4)

Immediately, we have the time dependence

\begin{aligned}T \propto e^{-i E t/\hbar},\end{aligned} \hspace{\stretch{1}}(2.5)

with the PDE reduced to

\begin{aligned}\frac{X''}{X} +\frac{Y''}{Y} = - \frac{2m E}{\hbar^2}.\end{aligned} \hspace{\stretch{1}}(2.6)

Introducing separate independent constants

\begin{aligned}\frac{X''}{X} &= a^2 \\ \frac{Y''}{Y} &= b^2 \end{aligned} \hspace{\stretch{1}}(2.7)

provides the pre-normalized wave function and the constraints on the constants

\begin{aligned}\Psi &= C e^{ax}e^{by}e^{-iE t/\hbar} \\ a^2 + b^2 &= -\frac{2 m E}{\hbar^2}.\end{aligned} \hspace{\stretch{1}}(2.9)

Rectangular normalization.

We are now ready to apply normalization constraints. One possibility is a rectangular periodicity requirement.

\begin{aligned}e^{ax} &= e^{a(x + \lambda_x)} \\ e^{by} &= e^{b(y + \lambda_y)} ,\end{aligned} \hspace{\stretch{1}}(2.11)

or

\begin{aligned}a\lambda_x &= 2 \pi i m \\ b\lambda_y &= 2 \pi i n.\end{aligned} \hspace{\stretch{1}}(2.13)

This provides a more explicit form for the energy expression

\begin{aligned}E_{mn} &= \frac{1}{{2m}} 4 \pi^2 \hbar^2 \left( \frac{m^2}{{\lambda_x}^2}+\frac{n^2}{{\lambda_y}^2}\right).\end{aligned} \hspace{\stretch{1}}(2.15)

We can also add in the area normalization using

\begin{aligned}\left\langle{{\psi}} \vert {{\phi}}\right\rangle &= \int_{x=0}^{\lambda_x} dx\int_{y=0}^{\lambda_y} dy \psi^{*}(x,y) \phi(x,y).\end{aligned} \hspace{\stretch{1}}(2.16)

Our eigenfunctions are now completely specified

\begin{aligned}u_{mn}(x,y,t) &= \frac{1}{{\sqrt{\lambda_x \lambda_y}}}e^{2 \pi i m x/\lambda_x}e^{2 \pi i n y/\lambda_y}e^{-iE_{mn} t/\hbar}.\end{aligned} \hspace{\stretch{1}}(2.17)

The interesting thing about this solution is that we can make arbitrary linear combinations

\begin{aligned}f(x,y) = \sum_{mn} a_{mn} u_{mn}\end{aligned} \hspace{\stretch{1}}(2.18)

and then “solve” for a_{mn}, for an arbitrary f(x,y) by taking inner products

\begin{aligned}a_{mn} = \left\langle{{u_{mn}}} \vert {{f}}\right\rangle =\int_{x=0}^{\lambda_x} dx \int_{y=0}^{\lambda_y} dy f(x,y) u_{mn}^{*}(x,y).\end{aligned} \hspace{\stretch{1}}(2.19)

This gives the appearance that any function f(x,y) is a solution, but the equality of 2.18 only applies for functions in the span of this function vector space. The procedure works for arbitrary square integrable functions f(x,y), but the equality really means that the RHS will be the periodic extension of f(x,y).
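
numpy's FFT computes exactly these inner products on a sampled grid, which makes for a quick demonstration of the expansion (an aside of my own; the test function and rectangle dimensions are arbitrary):

import numpy as np

lam_x, lam_y, N = 2.0, 3.0, 64
x = np.linspace(0, lam_x, N, endpoint=False)
y = np.linspace(0, lam_y, N, endpoint=False)
X, Y = np.meshgrid(x, y, indexing='ij')

# a function already in the span of the u_mn
f = np.sin(2*np.pi*X/lam_x)*np.cos(4*np.pi*Y/lam_y) + 0.3

a_mn = np.fft.fft2(f)/N**2          # the inner products <u_mn | f>
f_back = np.fft.ifft2(a_mn*N**2)    # the expansion sum of a_mn u_mn
print(np.allclose(f, f_back.real))  # True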

Infinite space normalization.

An alternate normalization is possible by using the Fourier transform normalization, in which we substitute

\begin{aligned}\frac{2 \pi m }{\lambda_x} &= k_x \\ \frac{2 \pi n }{\lambda_y} &= k_y \end{aligned} \hspace{\stretch{1}}(2.20)

Our inner product is now

\begin{aligned}\left\langle{{\psi}} \vert {{\phi}}\right\rangle &= \int_{-\infty}^{\infty} dx\int_{-\infty}^{\infty} dy \psi^{*}(x,y) \phi(x,y).\end{aligned} \hspace{\stretch{1}}(2.22)

And the corresponding normalized wavefunction and associated energy constant E are

\begin{aligned}u_{\mathbf{k}}(x,y,t) &= \frac{1}{{2\pi}}e^{i k_x x}e^{i k_y y}e^{-iE t/\hbar} = \frac{1}{{2\pi}}e^{i \mathbf{k} \cdot \mathbf{x}}e^{-iE t/\hbar} \\ E &= \frac{\hbar^2 \mathbf{k}^2 }{2m}\end{aligned} \hspace{\stretch{1}}(2.23)

Now via this Fourier inner product we are able to construct a solution from any square integrable function. Again, this will not be
an exact equality since the Fourier transform has the effect of averaging across discontinuities.

Polar case.

In polar coordinates our gradient is

\begin{aligned}\boldsymbol{\nabla} &= \hat{\mathbf{r}} \partial_r + \frac{\hat{\boldsymbol{\theta}}}{r} \partial_\theta.\end{aligned} \hspace{\stretch{1}}(2.25)

with

\begin{aligned}\hat{\mathbf{r}} &= \mathbf{e}_1 e^{\mathbf{e}_1 \mathbf{e}_2 \theta} \\ \hat{\boldsymbol{\theta}} &= \mathbf{e}_2 e^{\mathbf{e}_1 \mathbf{e}_2 \theta} .\end{aligned} \hspace{\stretch{1}}(2.26)

Squaring the gradient for the Laplacian we’ll need the partials, which are

\begin{aligned}\partial_r \hat{\mathbf{r}} &= 0 \\ \partial_r \hat{\boldsymbol{\theta}} &= 0 \\ \partial_\theta \hat{\mathbf{r}} &= \hat{\boldsymbol{\theta}} \\ \partial_\theta \hat{\boldsymbol{\theta}} &= -\hat{\mathbf{r}}.\end{aligned}

The Laplacian is therefore

\begin{aligned}\boldsymbol{\nabla}^2 &= (\hat{\mathbf{r}} \partial_r + \frac{\hat{\boldsymbol{\theta}}}{r} \partial_\theta) \cdot(\hat{\mathbf{r}} \partial_r + \frac{\hat{\boldsymbol{\theta}}}{r} \partial_\theta) \\ &= \partial_{rr} + \frac{\hat{\boldsymbol{\theta}}}{r} \cdot \partial_\theta (\hat{\mathbf{r}} \partial_r) + \frac{\hat{\boldsymbol{\theta}}}{r} \cdot \partial_\theta \left( \frac{\hat{\boldsymbol{\theta}}}{r} \partial_\theta \right) \\ &= \partial_{rr} + \frac{\hat{\boldsymbol{\theta}}}{r} \cdot (\partial_\theta \hat{\mathbf{r}}) \partial_r + \frac{\hat{\boldsymbol{\theta}}}{r} \cdot \frac{\hat{\boldsymbol{\theta}}}{r} \partial_{\theta\theta} + \frac{\hat{\boldsymbol{\theta}}}{r} \cdot (\partial_\theta \hat{\boldsymbol{\theta}}) \frac{1}{{r}} \partial_\theta .\end{aligned}

Evaluating the derivatives we have

\begin{aligned}\boldsymbol{\nabla}^2 = \partial_{rr} + \frac{1}{{r}} \partial_r + \frac{1}{r^2} \partial_{\theta\theta},\end{aligned} \hspace{\stretch{1}}(2.28)
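
Before using it, this operator can be sanity checked against the Cartesian Laplacian with a concrete test function (a sympy aside of my own):

import sympy as sp

x, y = sp.symbols('x y', positive=True)
r = sp.sqrt(x**2 + y**2)

# test function r^3 cos(2 theta) = r (x^2 - y^2), in Cartesian form
g = r*(x**2 - y**2)
cartesian = sp.diff(g, x, 2) + sp.diff(g, y, 2)

rho, phi = sp.symbols('rho phi', positive=True)
gp = rho**3*sp.cos(2*phi)
polar = sp.diff(gp, rho, 2) + sp.diff(gp, rho)/rho + sp.diff(gp, phi, 2)/rho**2

# both reduce to 5 rho cos(2 phi) = 5 (x^2 - y^2)/r
print(sp.simplify(cartesian - 5*(x**2 - y**2)/r))  # 0
print(sp.simplify(polar - 5*rho*sp.cos(2*phi)))    # 0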

We are now prepared to move on to the solution of the Hamiltonian H = -(\hbar^2/2m) \boldsymbol{\nabla}^2. With separation of variables again using \Psi = R(r) \Theta(\theta) T(t) we have

\begin{aligned}-\frac{\hbar^2}{2m} \left( \frac{R''}{R} + \frac{R'}{rR} + \frac{1}{{r^2}} \frac{\Theta''}{\Theta} \right) = i \hbar \frac{T'}{T} = E.\end{aligned} \hspace{\stretch{1}}(2.29)

Rearranging to separate the \Theta term we have

\begin{aligned}\frac{r^2 R''}{R} + \frac{r R'}{R} + \frac{2 m E}{\hbar^2} r^2 = -\frac{\Theta''}{\Theta} = \lambda^2.\end{aligned} \hspace{\stretch{1}}(2.30)

The angular solutions are given by

\begin{aligned}\Theta = \frac{1}{{\sqrt{2\pi}}} e^{i \lambda \theta}\end{aligned} \hspace{\stretch{1}}(2.31)

where the normalization is given by

\begin{aligned}\left\langle{{\psi}} \vert {{\phi}}\right\rangle &= \int_{0}^{2 \pi} d\theta \psi^{*}(\theta) \phi(\theta).\end{aligned} \hspace{\stretch{1}}(2.32)

And the radial part is given by the solution of the ODE

\begin{aligned}r^2 R'' + r R' + \left( \frac{2 m E}{\hbar^2} r^2 - \lambda^2 \right) R = 0\end{aligned} \hspace{\stretch{1}}(2.33)

Problem 2.

Statement.

Use the orthogonality property of P_l(\cos\theta)

\begin{aligned}\int_{-1}^1 dx P_l(x) P_{l'}(x) = \frac{2}{2l+1} \delta_{l l'},\end{aligned} \hspace{\stretch{1}}(2.34)

confirm that at least the first two terms of (4.171)

\begin{aligned}e^{i k r \cos\theta} = \sum_{l=0}^\infty (2l + 1) i^l j_l(kr) P_l(\cos\theta)\end{aligned} \hspace{\stretch{1}}(2.35)

are correct.

Solution.

Taking the inner product using the integral of 2.34 we have

\begin{aligned}\int_{-1}^1 dx e^{i k r x} P_{l'}(x) = 2 i^{l'} j_{l'}(kr) \end{aligned} \hspace{\stretch{1}}(2.36)

To confirm the first two terms we need

\begin{aligned}P_0(x) &= 1 \\ P_1(x) &= x \\ j_0(\rho) &= \frac{\sin\rho}{\rho} \\ j_1(\rho) &= \frac{\sin\rho}{\rho^2} - \frac{\cos\rho}{\rho}.\end{aligned} \hspace{\stretch{1}}(2.37)

On the LHS for l'=0 we have

\begin{aligned}\int_{-1}^1 dx e^{i k r x} = 2 \frac{\sin{kr}}{kr}\end{aligned} \hspace{\stretch{1}}(2.41)

On the LHS for l'=1 note that

\begin{aligned}\int dx x e^{i k r x} &= \int dx x \frac{d}{dx} \frac{e^{i k r x}}{ikr} \\ &= x \frac{e^{i k r x}}{ikr} - \frac{e^{i k r x}}{(ikr)^2}.\end{aligned}

So, integration in [-1,1] gives us

\begin{aligned}\int_{-1}^1 dx x e^{i k r x} =  -2i \frac{\cos{kr}}{kr} + 2i \frac{1}{{(kr)^2}} \sin{kr}.\end{aligned} \hspace{\stretch{1}}(2.42)

Now compare to the RHS for l'=0, which is

\begin{aligned}2 j_0(kr) = 2 \frac{\sin{kr}}{kr},\end{aligned} \hspace{\stretch{1}}(2.43)

which matches 2.41. For l'=1 we have

\begin{aligned}2 i j_1(kr) = 2i \frac{1}{{kr}} \left( \frac{\sin{kr}}{kr} - \cos{kr} \right),\end{aligned} \hspace{\stretch{1}}(2.44)

which in turn matches 2.42, completing the exercise.
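
Both matches can also be confirmed numerically. A scipy check of my own, for an arbitrary value of kr:

import numpy as np
from scipy.integrate import quad
from scipy.special import eval_legendre, spherical_jn

kr = 2.3  # arbitrary
for lp in (0, 1):
    re = quad(lambda u: np.cos(kr*u)*eval_legendre(lp, u), -1, 1)[0]
    im = quad(lambda u: np.sin(kr*u)*eval_legendre(lp, u), -1, 1)[0]
    lhs = re + 1j*im                      # integral of exp(i k r u) P_l'(u)
    rhs = 2*(1j**lp)*spherical_jn(lp, kr)
    print(lp, lhs, rhs)                   # the pairs agree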

Problem 3.

Statement.

Obtain the commutation relations \left[{L_i},{L_j}\right] by calculating the vector \mathbf{L} \times \mathbf{L} using the definition \mathbf{L} = \mathbf{r} \times \mathbf{p} directly instead of introducing a differential operator.

Solution.

Expressing the product \mathbf{L} \times \mathbf{L} in determinant form sheds some light on this question. That is

\begin{aligned}\begin{vmatrix} \mathbf{e}_1 & \mathbf{e}_2 & \mathbf{e}_3 \\  L_1 & L_2 & L_3 \\  L_1 & L_2 & L_3\end{vmatrix}&= \mathbf{e}_1 \left[{L_2},{L_3}\right] +\mathbf{e}_2 \left[{L_3},{L_1}\right] +\mathbf{e}_3 \left[{L_1},{L_2}\right]= \frac{1}{{2}} \mathbf{e}_i \epsilon_{ijk} \left[{L_j},{L_k}\right]\end{aligned} \hspace{\stretch{1}}(2.45)

We see that evaluating this cross product in turn requires evaluation of the set of commutators. We can do that with the canonical commutator relationships directly using L_i = \epsilon_{ijk} r_j p_k like so

\begin{aligned}\left[{L_i},{L_j}\right]&=\epsilon_{imn} r_m p_n \epsilon_{jab} r_a p_b- \epsilon_{jab} r_a p_b \epsilon_{imn} r_m p_n \\ &=\epsilon_{imn} \epsilon_{jab} r_m (p_n r_a) p_b- \epsilon_{jab} \epsilon_{imn} r_a (p_b r_m) p_n \\ &=\epsilon_{imn} \epsilon_{jab} r_m (r_a p_n -i \hbar \delta_{an}) p_b- \epsilon_{jab} \epsilon_{imn} r_a (r_m p_b - i \hbar \delta_{mb}) p_n \\ &=\epsilon_{imn} \epsilon_{jab} (r_m r_a p_n p_b - r_a r_m p_b p_n )- i \hbar ( \epsilon_{imn} \epsilon_{jnb} r_m p_b - \epsilon_{jam} \epsilon_{imn} r_a p_n ).\end{aligned}

The first two terms cancel, and we can employ (4.179) to eliminate the antisymmetric tensors from the last two terms

\begin{aligned}\left[{L_i},{L_j}\right]&=i \hbar ( \epsilon_{nim} \epsilon_{njb} r_m p_b - \epsilon_{mja} \epsilon_{min} r_a p_n ) \\ &=i \hbar ( (\delta_{ij} \delta_{mb} -\delta_{ib} \delta_{mj}) r_m p_b - (\delta_{ji} \delta_{an} -\delta_{jn} \delta_{ai}) r_a p_n ) \\ &=i \hbar (\delta_{ij} \delta_{mb} r_m p_b - \delta_{ji} \delta_{an} r_a p_n - \delta_{ib} \delta_{mj} r_m p_b + \delta_{jn} \delta_{ai} r_a p_n ) \\ &=i \hbar (\delta_{ij} r_m p_m- \delta_{ji} r_a p_a- r_j p_i+ r_i p_j ) \\ \end{aligned}

For k \ne i,j, this is i\hbar (\mathbf{r} \times \mathbf{p})_k, so we can write

\begin{aligned}\mathbf{L} \times \mathbf{L} &= \frac{1}{{2}} i\hbar \mathbf{e}_k \epsilon_{kij} ( r_i p_j - r_j p_i ) = i\hbar \mathbf{e}_k \epsilon_{kij} r_i p_j = i\hbar \mathbf{L}.\end{aligned} \hspace{\stretch{1}}(2.46)

In [2], the commutator relationships are summarized with this \mathbf{L} \times \mathbf{L} form, instead of with the antisymmetric tensor form (4.224)

\begin{aligned}\left[{L_i},{L_j}\right] &= i \hbar \epsilon_{ijk} L_k\end{aligned} \hspace{\stretch{1}}(2.47)

as here in Desai. Both say the same thing.
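
For a concrete spot check of my own (using the standard l = 1 matrix representation, not anything from the text), these commutation relations can be verified numerically:

import numpy as np

hbar = 1.0
s = 1/np.sqrt(2)

# standard l = 1 angular momentum matrices in the |m = 1, 0, -1> basis
Lx = hbar*s*np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Ly = hbar*s*np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]], dtype=complex)
Lz = hbar*np.diag([1, 0, -1]).astype(complex)

print(np.allclose(Lx @ Ly - Ly @ Lx, 1j*hbar*Lz))  # True
print(np.allclose(Ly @ Lz - Lz @ Ly, 1j*hbar*Lx))  # True
print(np.allclose(Lz @ Lx - Lx @ Lz, 1j*hbar*Ly))  # True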

Problem 4.

Statement.

Solution.

TODO.

Problem 5.

Statement.

A free particle is moving along a path of radius R. Express the Hamiltonian in terms of the derivatives involving the polar angle of the particle and write down the Schrödinger equation. Determine the wavefunction and the energy eigenvalues of the particle.

Solution.

In classical mechanics our Lagrangian for this system is

\begin{aligned}\mathcal{L} = \frac{1}{{2}} m R^2 \dot{\theta}^2,\end{aligned} \hspace{\stretch{1}}(2.48)

with the canonical momentum

\begin{aligned}p_\theta = \frac{\partial {\mathcal{L}}}{\partial {\dot{\theta}}} = m R^2 \dot{\theta}.\end{aligned} \hspace{\stretch{1}}(2.49)

Thus the classical Hamiltonian is

\begin{aligned}H = \frac{1}{{2m R^2}} {p_\theta}^2.\end{aligned} \hspace{\stretch{1}}(2.50)

By analogy the QM Hamiltonian operator will therefore be

\begin{aligned}H = -\frac{\hbar^2}{2m R^2} \partial_{\theta\theta}.\end{aligned} \hspace{\stretch{1}}(2.51)

For \Psi = \Theta(\theta) T(t), separation of variables gives us

\begin{aligned}-\frac{\hbar^2}{2m R^2} \frac{\Theta''}{\Theta} = i \hbar \frac{T'}{T} = E,\end{aligned} \hspace{\stretch{1}}(2.52)

from which we have

\begin{aligned}T &\propto e^{-i E t/\hbar} \\ \Theta &\propto e^{ \pm i \sqrt{2m E} R \theta/\hbar }.\end{aligned} \hspace{\stretch{1}}(2.53)

Requiring single valued \Theta, equal at any multiples of 2\pi, we have

\begin{aligned}e^{ \pm i \sqrt{2m E} R (\theta + 2\pi)/\hbar } = e^{ \pm i \sqrt{2m E} R \theta/\hbar },\end{aligned}

or

\begin{aligned}\pm \sqrt{2m E} \frac{R}{\hbar} 2\pi = 2 \pi n,\end{aligned}

Suffixing the energy values with this index we have

\begin{aligned}E_n = \frac{n^2 \hbar^2}{2 m R^2}.\end{aligned} \hspace{\stretch{1}}(2.55)

Allowing both positive and negative integer values for n we have

\begin{aligned}\Psi = \frac{1}{{\sqrt{2\pi}}} e^{i n \theta} e^{-i E_n t/\hbar},\end{aligned} \hspace{\stretch{1}}(2.56)

where the normalization was a result of the use of a [0,2\pi] inner product over the angles

\begin{aligned}\left\langle{{\psi}} \vert {{\phi}}\right\rangle \equiv \int_0^{2\pi} \psi^{*}(\theta) \phi(\theta) d\theta.\end{aligned} \hspace{\stretch{1}}(2.57)
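
These eigenvalues are easy to reproduce numerically with a periodic finite difference Hamiltonian. A numpy check of my own, in \hbar = m = R = 1 units (each non-zero level shows up twice, once each for \pm n):

import numpy as np

hbar = m = R = 1.0
N = 400
dtheta = 2*np.pi/N

# periodic second derivative by central differences
D2 = (np.roll(np.eye(N), 1, axis=0) - 2*np.eye(N)
      + np.roll(np.eye(N), -1, axis=0))/dtheta**2
H = -hbar**2/(2*m*R**2)*D2

E = np.sort(np.linalg.eigvalsh(H))
print(E[:5])                                         # ~ [0, 0.5, 0.5, 2, 2]
print([n**2*hbar**2/(2*m*R**2) for n in (0, 1, 1, 2, 2)])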

Problem 6.

Statement.

Determine \left[{L_i},{r}\right] and \left[{L_i},{\mathbf{r}}\right].

Solution.

Since L_i contain only \theta and \phi partials, \left[{L_i},{r}\right] = 0. For the position vector, however, we have an angular dependence, and are left to evaluate \left[{L_i},{\mathbf{r}}\right] = r \left[{L_i},{\hat{\mathbf{r}}}\right]. We’ll need the partials for \hat{\mathbf{r}}. We have

\begin{aligned}\hat{\mathbf{r}} &= \mathbf{e}_3 e^{I \hat{\boldsymbol{\phi}} \theta} \\ \hat{\boldsymbol{\phi}} &= \mathbf{e}_2 e^{\mathbf{e}_1 \mathbf{e}_2 \phi} \\ I &= \mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_3\end{aligned} \hspace{\stretch{1}}(2.58)

Evaluating the partials we have

\begin{aligned}\partial_\theta \hat{\mathbf{r}} = \hat{\mathbf{r}} I \hat{\boldsymbol{\phi}}\end{aligned}

With

\begin{aligned}\hat{\boldsymbol{\theta}} &= \tilde{R} \mathbf{e}_1 R \\ \hat{\boldsymbol{\phi}} &= \tilde{R} \mathbf{e}_2 R \\ \hat{\mathbf{r}} &= \tilde{R} \mathbf{e}_3 R\end{aligned} \hspace{\stretch{1}}(2.61)

where \tilde{R} R = 1, and \hat{\boldsymbol{\theta}} \hat{\boldsymbol{\phi}} \hat{\mathbf{r}} = \mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_3, we have

\begin{aligned}\partial_\theta \hat{\mathbf{r}} &= \tilde{R} \mathbf{e}_3 \mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_3 \mathbf{e}_2 R = \tilde{R} \mathbf{e}_1 R = \hat{\boldsymbol{\theta}}\end{aligned} \hspace{\stretch{1}}(2.64)

For the \phi partial we have

\begin{aligned}\partial_\phi \hat{\mathbf{r}}&= \mathbf{e}_3 \sin\theta I \hat{\boldsymbol{\phi}} \mathbf{e}_1 \mathbf{e}_2 \\ &= \sin\theta \hat{\boldsymbol{\phi}}\end{aligned}

We are now prepared to evaluate the commutators. Starting with the easiest we have

\begin{aligned}\left[{L_z},{\hat{\mathbf{r}}}\right] \Psi&=-i \hbar (\partial_\phi \hat{\mathbf{r}} \Psi - \hat{\mathbf{r}} \partial_\phi \Psi ) \\ &=-i \hbar (\partial_\phi \hat{\mathbf{r}}) \Psi  \\ \end{aligned}

So we have

\begin{aligned}\left[{L_z},{\hat{\mathbf{r}}}\right]&=-i \hbar \sin\theta \hat{\boldsymbol{\phi}}\end{aligned} \hspace{\stretch{1}}(2.65)

Observe that by virtue of the product rule, only the action of the partials on \hat{\mathbf{r}} itself contributes, and all the partials applied to \Psi cancel out due to the commutator differences. That simplifies the remaining commutator evaluations. For reference, the polar forms of L_x and L_y are

\begin{aligned}L_x &= -i \hbar (-S_\phi \partial_\theta - C_\phi \cot\theta \partial_\phi) \\ L_y &= -i \hbar (C_\phi \partial_\theta - S_\phi \cot\theta \partial_\phi),\end{aligned} \hspace{\stretch{1}}(2.66)

where the sines and cosines are written as S and C respectively for short.

We therefore have

\begin{aligned}\left[{L_x},{\hat{\mathbf{r}}}\right]&= -i \hbar (-S_\phi (\partial_\theta \hat{\mathbf{r}}) - C_\phi \cot\theta (\partial_\phi \hat{\mathbf{r}}) ) \\ &= -i \hbar (-S_\phi \hat{\boldsymbol{\theta}} - C_\phi \cot\theta S_\theta \hat{\boldsymbol{\phi}} ) \\ &= -i \hbar (-S_\phi \hat{\boldsymbol{\theta}} - C_\phi C_\theta \hat{\boldsymbol{\phi}} ) \\ \end{aligned}

and

\begin{aligned}\left[{L_y},{\hat{\mathbf{r}}}\right]&= -i \hbar (C_\phi (\partial_\theta \hat{\mathbf{r}}) - S_\phi \cot\theta (\partial_\phi \hat{\mathbf{r}})) \\ &= -i \hbar (C_\phi \hat{\boldsymbol{\theta}} - S_\phi C_\theta \hat{\boldsymbol{\phi}} ).\end{aligned}

Adding back in the factor of r, and summarizing we have

\begin{aligned}\left[{L_i},{r}\right] &= 0 \\ \left[{L_x},{\mathbf{r}}\right] &= -i \hbar r (-\sin\phi \hat{\boldsymbol{\theta}} - \cos\phi \cos\theta \hat{\boldsymbol{\phi}} ) \\ \left[{L_y},{\mathbf{r}}\right] &= -i \hbar r (\cos\phi \hat{\boldsymbol{\theta}} - \sin\phi \cos\theta \hat{\boldsymbol{\phi}} ) \\ \left[{L_z},{\mathbf{r}}\right] &= -i \hbar r \sin\theta \hat{\boldsymbol{\phi}}\end{aligned} \hspace{\stretch{1}}(2.68)
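Since sign errors are easy to make here, a Cartesian spot check of the L_z result doesn't hurt (my addition). With r \sin\theta \hat{\boldsymbol{\phi}} = (-y, x, 0), equation 2.68 says [L_z, x] = i \hbar y, [L_z, y] = -i \hbar x, and [L_z, z] = 0, which sympy confirms:

import sympy as sp

x, y, z, hbar = sp.symbols('x y z hbar')
f = sp.Function('f')(x, y, z)

# L_z as a differential operator.
def L_z(g): return -sp.I * hbar * (x * sp.diff(g, y) - y * sp.diff(g, x))

# [L_z, x_i] f = L_z(x_i f) - x_i L_z(f); each residual should vanish.
print(sp.simplify(L_z(x * f) - x * L_z(f) - sp.I * hbar * y * f))  # expect 0
print(sp.simplify(L_z(y * f) - y * L_z(f) + sp.I * hbar * x * f))  # expect 0
print(sp.simplify(L_z(z * f) - z * L_z(f)))                        # expect 0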

Problem 7.

Statement.

Show that

\begin{aligned}e^{-i\pi L_x /\hbar } {\lvert {l,m} \rangle} = {\lvert {l,m-1} \rangle}\end{aligned} \hspace{\stretch{1}}(2.72)

Solution.

TODO.

References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.

[2] R. Liboff. Introductory quantum mechanics. 2003.


Derivation of the spherical polar Laplacian

Posted by peeterjoot on October 9, 2010

[Click here for a PDF of this post with nicer formatting]

Motivation.

In [1] was a Geometric Algebra derivation of the 2D polar Laplacian by squaring the gradient. In [2] was a factorization of the spherical polar unit vectors in a tidy compact form. Here both of these ideas are utilized to derive the spherical polar form for the Laplacian, an operation that is strictly algebraic (squaring the gradient) provided we operate on the unit vectors correctly.

Our rotation multivector.

Our starting point is a pair of rotations. We rotate first in the x,y plane by \phi

\begin{aligned}\mathbf{x} &\rightarrow \mathbf{x}' = \tilde{R_\phi} \mathbf{x} R_\phi \\ i &\equiv \mathbf{e}_1 \mathbf{e}_2 \\ R_\phi &= e^{i \phi/2}\end{aligned} \hspace{\stretch{1}}(2.1)

Then apply a rotation in the \mathbf{e}_3 \wedge (\tilde{R_\phi} \mathbf{e}_1 R_\phi) = \tilde{R_\phi} \mathbf{e}_3 \mathbf{e}_1 R_\phi plane

\begin{aligned}\mathbf{x}' &\rightarrow \mathbf{x}'' = \tilde{R_\theta} \mathbf{x}' R_\theta \\ R_\theta &= e^{ \tilde{R_\phi} \mathbf{e}_3 \mathbf{e}_1 R_\phi \theta/2 } = \tilde{R_\phi} e^{ \mathbf{e}_3 \mathbf{e}_1 \theta/2 } R_\phi\end{aligned} \hspace{\stretch{1}}(2.4)

The composition of rotations now gives us

\begin{aligned}\mathbf{x}&\rightarrow \mathbf{x}'' = \tilde{R_\theta} \tilde{R_\phi} \mathbf{x} R_\phi R_\theta = \tilde{R} \mathbf{x} R \\ R &= R_\phi R_\theta = e^{ \mathbf{e}_3 \mathbf{e}_1 \theta/2 } e^{ \mathbf{e}_1 \mathbf{e}_2 \phi/2 }.\end{aligned}

Expressions for the unit vectors.

The unit vectors in the rotated frame can now be calculated. With I = \mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_3 we can calculate

\begin{aligned}\hat{\boldsymbol{\phi}} &= \tilde{R} \mathbf{e}_2 R  \\ \hat{\mathbf{r}} &= \tilde{R} \mathbf{e}_3 R  \\ \hat{\boldsymbol{\theta}} &= \tilde{R} \mathbf{e}_1 R\end{aligned} \hspace{\stretch{1}}(3.6)

Performing these we get

\begin{aligned}\hat{\boldsymbol{\phi}}&= e^{ -\mathbf{e}_1 \mathbf{e}_2 \phi/2 } e^{ -\mathbf{e}_3 \mathbf{e}_1 \theta/2 } \mathbf{e}_2 e^{ \mathbf{e}_3 \mathbf{e}_1 \theta/2 } e^{ \mathbf{e}_1 \mathbf{e}_2 \phi/2 } \\ &= e^{ -\mathbf{e}_1 \mathbf{e}_2 \phi/2 } \mathbf{e}_2 e^{ \mathbf{e}_1 \mathbf{e}_2 \phi/2 } \\ &= \mathbf{e}_2 e^{ i \phi },\end{aligned}

and

\begin{aligned}\hat{\mathbf{r}}&= e^{ -\mathbf{e}_1 \mathbf{e}_2 \phi/2 } e^{ -\mathbf{e}_3 \mathbf{e}_1 \theta/2 } \mathbf{e}_3 e^{ \mathbf{e}_3 \mathbf{e}_1 \theta/2 } e^{ \mathbf{e}_1 \mathbf{e}_2 \phi/2 } \\ &= e^{ -\mathbf{e}_1 \mathbf{e}_2 \phi/2 } (\mathbf{e}_3 \cos\theta + \mathbf{e}_1 \sin\theta ) e^{ \mathbf{e}_1 \mathbf{e}_2 \phi/2 } \\ &= \mathbf{e}_3 \cos\theta +\mathbf{e}_1 \sin\theta e^{ \mathbf{e}_1 \mathbf{e}_2 \phi } \\ &= \mathbf{e}_3 (\cos\theta + \mathbf{e}_3 \mathbf{e}_1 \sin\theta e^{ \mathbf{e}_1 \mathbf{e}_2 \phi } ) \\ &= \mathbf{e}_3 e^{I \hat{\boldsymbol{\phi}} \theta},\end{aligned}

and

\begin{aligned}\hat{\boldsymbol{\theta}}&= e^{ -\mathbf{e}_1 \mathbf{e}_2 \phi/2 } e^{ -\mathbf{e}_3 \mathbf{e}_1 \theta/2 } \mathbf{e}_1 e^{ \mathbf{e}_3 \mathbf{e}_1 \theta/2 } e^{ \mathbf{e}_1 \mathbf{e}_2 \phi/2 } \\ &= e^{ -\mathbf{e}_1 \mathbf{e}_2 \phi/2 } ( \mathbf{e}_1 \cos\theta - \mathbf{e}_3 \sin\theta ) e^{ \mathbf{e}_1 \mathbf{e}_2 \phi/2 } \\ &= \mathbf{e}_1 \cos\theta e^{ \mathbf{e}_1 \mathbf{e}_2 \phi } - \mathbf{e}_3 \sin\theta \\ &= i \hat{\boldsymbol{\phi}} \cos\theta - \mathbf{e}_3 \sin\theta \\ &= i \hat{\boldsymbol{\phi}} (\cos\theta + \hat{\boldsymbol{\phi}} i \mathbf{e}_3 \sin\theta ) \\ &= i \hat{\boldsymbol{\phi}} e^{I \hat{\boldsymbol{\phi}} \theta}.\end{aligned}

Summarizing these are

\begin{aligned}\hat{\boldsymbol{\phi}} &= \mathbf{e}_2 e^{ i \phi } \\ \hat{\mathbf{r}} &= \mathbf{e}_3 e^{I \hat{\boldsymbol{\phi}} \theta} \\ \hat{\boldsymbol{\theta}} &= i \hat{\boldsymbol{\phi}} e^{I \hat{\boldsymbol{\phi}} \theta}.\end{aligned} \hspace{\stretch{1}}(3.9)

Derivatives of the unit vectors.

We’ll need the partials. Most of these can be computed from 3.9 by inspection, and are

\begin{aligned}\partial_r \hat{\boldsymbol{\phi}} &= 0 \\ \partial_r \hat{\mathbf{r}} &= 0 \\ \partial_r \hat{\boldsymbol{\theta}} &= 0 \\ \partial_\theta \hat{\boldsymbol{\phi}} &= 0 \\ \partial_\theta \hat{\mathbf{r}} &= \hat{\mathbf{r}} I \hat{\boldsymbol{\phi}} \\ \partial_\theta \hat{\boldsymbol{\theta}} &= \hat{\boldsymbol{\theta}} I \hat{\boldsymbol{\phi}} \\ \partial_\phi \hat{\boldsymbol{\phi}} &= \hat{\boldsymbol{\phi}} i \\ \partial_\phi \hat{\mathbf{r}} &= \hat{\boldsymbol{\phi}} \sin\theta \\ \partial_\phi \hat{\boldsymbol{\theta}} &= \hat{\boldsymbol{\phi}} \cos\theta\end{aligned} \hspace{\stretch{1}}(4.12)
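These can be spot checked against the familiar coordinate representations \hat{\mathbf{r}} = (\sin\theta \cos\phi, \sin\theta \sin\phi, \cos\theta), \hat{\boldsymbol{\theta}} = (\cos\theta \cos\phi, \cos\theta \sin\phi, -\sin\theta), and \hat{\boldsymbol{\phi}} = (-\sin\phi, \cos\phi, 0). Using \hat{\mathbf{r}} I \hat{\boldsymbol{\phi}} = \hat{\boldsymbol{\theta}} and \hat{\boldsymbol{\theta}} I \hat{\boldsymbol{\phi}} = -\hat{\mathbf{r}} to translate the geometric products, a sympy verification (my addition) is

import sympy as sp

theta, phi = sp.symbols('theta phi', real=True)

r_hat = sp.Matrix([sp.sin(theta) * sp.cos(phi), sp.sin(theta) * sp.sin(phi), sp.cos(theta)])
theta_hat = sp.Matrix([sp.cos(theta) * sp.cos(phi), sp.cos(theta) * sp.sin(phi), -sp.sin(theta)])
phi_hat = sp.Matrix([-sp.sin(phi), sp.cos(phi), 0])

# Each difference should be the zero vector.
print(sp.simplify(r_hat.diff(theta) - theta_hat))                  # d r_hat/d theta = theta_hat
print(sp.simplify(theta_hat.diff(theta) + r_hat))                  # d theta_hat/d theta = -r_hat
print(sp.simplify(r_hat.diff(phi) - sp.sin(theta) * phi_hat))      # d r_hat/d phi = sin(theta) phi_hat
print(sp.simplify(theta_hat.diff(phi) - sp.cos(theta) * phi_hat))  # d theta_hat/d phi = cos(theta) phi_hat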

Expanding the Laplacian.

We note that the vector line element is d\mathbf{s} = \hat{\mathbf{r}} dr + \hat{\boldsymbol{\theta}} r d\theta + \hat{\boldsymbol{\phi}} r\sin\theta d\phi, so our gradient in spherical coordinates is

\begin{aligned}\boldsymbol{\nabla} &= \hat{\mathbf{r}} \partial_r + \frac{\hat{\boldsymbol{\theta}}}{r} \partial_\theta + \frac{\hat{\boldsymbol{\phi}}}{r\sin\theta} \partial_\phi.\end{aligned} \hspace{\stretch{1}}(5.21)

We can now evaluate the Laplacian

\begin{aligned}\boldsymbol{\nabla}^2 &=\left( \hat{\mathbf{r}} \partial_r + \frac{\hat{\boldsymbol{\theta}}}{r} \partial_\theta + \frac{\hat{\boldsymbol{\phi}}}{r\sin\theta} \partial_\phi \right) \cdot\left( \hat{\mathbf{r}} \partial_r + \frac{\hat{\boldsymbol{\theta}}}{r} \partial_\theta + \frac{\hat{\boldsymbol{\phi}}}{r\sin\theta} \partial_\phi \right).\end{aligned} \hspace{\stretch{1}}(5.22)

Evaluating these one set at a time we have

\begin{aligned}\hat{\mathbf{r}} \partial_r \cdot \left( \hat{\mathbf{r}} \partial_r + \frac{\hat{\boldsymbol{\theta}}}{r} \partial_\theta + \frac{\hat{\boldsymbol{\phi}}}{r\sin\theta} \partial_\phi \right) &= \partial_{rr},\end{aligned}

and

\begin{aligned}\frac{1}{{r}} \hat{\boldsymbol{\theta}} \partial_\theta \cdot \left( \hat{\mathbf{r}} \partial_r + \frac{\hat{\boldsymbol{\theta}}}{r} \partial_\theta + \frac{\hat{\boldsymbol{\phi}}}{r\sin\theta} \partial_\phi \right)&=\frac{1}{{r}} \left\langle{{\hat{\boldsymbol{\theta}} \left(\hat{\mathbf{r}} I \hat{\boldsymbol{\phi}} \partial_r + \hat{\mathbf{r}} \partial_{\theta r}+ \frac{\hat{\boldsymbol{\theta}}}{r} \partial_{\theta\theta} + \frac{1}{{r}} \hat{\boldsymbol{\theta}} I \hat{\boldsymbol{\phi}} \partial_\theta+ \hat{\boldsymbol{\phi}} \partial_\theta \frac{1}{{r\sin\theta}} \partial_\phi\right)}}\right\rangle \\ &= \frac{1}{{r}} \partial_r+\frac{1}{{r^2}} \partial_{\theta\theta},\end{aligned}

and

\begin{aligned}\frac{\hat{\boldsymbol{\phi}}}{r\sin\theta} \partial_\phi &\cdot\left( \hat{\mathbf{r}} \partial_r + \frac{\hat{\boldsymbol{\theta}}}{r} \partial_\theta + \frac{\hat{\boldsymbol{\phi}}}{r\sin\theta} \partial_\phi \right) \\ &=\frac{1}{r\sin\theta} \left\langle{{\hat{\boldsymbol{\phi}}\left(\hat{\boldsymbol{\phi}} \sin\theta \partial_r + \hat{\mathbf{r}} \partial_{\phi r} + \hat{\boldsymbol{\phi}} \cos\theta \frac{1}{r} \partial_\theta + \frac{\hat{\boldsymbol{\theta}}}{r} \partial_{\phi \theta }+ \hat{\boldsymbol{\phi}} i \frac{1}{r\sin\theta} \partial_\phi + \hat{\boldsymbol{\phi}} \frac{1}{r\sin\theta} \partial_{\phi \phi }\right)}}\right\rangle \\ &=\frac{1}{{r}} \partial_r+ \frac{\cot\theta}{r^2}\partial_\theta+ \frac{1}{{r^2 \sin^2\theta}} \partial_{\phi\phi}\end{aligned}

Summing these we have

\begin{aligned}\boldsymbol{\nabla}^2 &=\partial_{rr}+ \frac{2}{r} \partial_r+\frac{1}{{r^2}} \partial_{\theta\theta}+ \frac{\cot\theta}{r^2}\partial_\theta+ \frac{1}{{r^2 \sin^2\theta}} \partial_{\phi\phi}\end{aligned} \hspace{\stretch{1}}(5.23)

This is often written with a chain rule trick to consolidate the r and \theta partials

\begin{aligned}\boldsymbol{\nabla}^2 \Psi &=\frac{1}{{r}} \partial_{rr} (r \Psi)+ \frac{1}{{r^2 \sin\theta}} \partial_\theta \left( \sin\theta \partial_\theta \Psi \right)+ \frac{1}{{r^2 \sin^2\theta}} \partial_{\phi\phi} \Psi\end{aligned} \hspace{\stretch{1}}(5.24)

It’s simple to verify that this is identical to 5.23.
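For completeness, the two expansions needed for that verification (filled in here) are

\begin{aligned}\frac{1}{{r}} \partial_{rr} (r \Psi) &= \frac{1}{{r}} \partial_r \left( \Psi + r \partial_r \Psi \right) = \partial_{rr} \Psi + \frac{2}{r} \partial_r \Psi \\ \frac{1}{{r^2 \sin\theta}} \partial_\theta \left( \sin\theta \partial_\theta \Psi \right) &= \frac{1}{{r^2}} \partial_{\theta\theta} \Psi + \frac{\cot\theta}{r^2} \partial_\theta \Psi,\end{aligned}

which together recover all the terms of 5.23.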

References

[1] Peeter Joot. Polar form for the gradient and Laplacian. [online]. http://sites.google.com/site/peeterjoot/math2009/polarGradAndLaplacian.pdf.

[2] Peeter Joot. Spherical Polar unit vectors in exponential form. [online]. http://sites.google.com/site/peeterjoot/math2009/sphericalPolarUnit.pdf.


On Professor Dmitrevsky’s “the only valid Laplacian definition is the divergence of gradient”.

Posted by peeterjoot on December 2, 2009

[Click here for a PDF of this post with nicer formatting]

Dedication.

To all tyrannical old Professors driven to cruelty by an unending barrage of increasingly ill prepared students.

Motivation.

The text [1] has an excellent general derivation of a number of forms of the gradient, divergence, curl and Laplacian.

This is actually done starting not with the usual Cartesian forms, but with more general definitions.

\begin{aligned}(\text{grad}\  \phi)_i &= \lim_{ds_i \rightarrow 0} \frac{\phi(q_i + dq_i) - \phi(q_i)}{ds_i} \\ \text{div}\  \mathbf{V} &= \lim_{\Delta \tau \rightarrow 0} \frac{1}{{\Delta \tau}} \int_\sigma \mathbf{V} \cdot d\boldsymbol{\sigma} \\ (\text{curl}\  \mathbf{V}) \cdot \mathbf{n} &= \lim_{\Delta \sigma \rightarrow 0} \frac{1}{{\Delta \sigma}} \oint_\lambda \mathbf{V} \cdot d\boldsymbol{\lambda} \\ \text{Laplacian}\  \phi &= \text{div} (\text{grad}\ \phi).\end{aligned} \quad\quad\quad(1)

These are then shown to imply the usual Cartesian definitions, plus provide the means to calculate the general relationships in whatever coordinate system you like. All in all one can’t beat this approach, and I’m not going to try to replicate it, because I can’t improve it in any way by doing so.

Given that, what do I have to say on this topic? Well, way way back in first year electricity and magnetism, my dictator of a prof, the intimidating but diminutive Dmitrevsky, yelled at us repeatedly that one cannot just dot the gradient to form the Laplacian. As far as he was concerned one can only say

\begin{aligned}\text{Laplacian}\  \phi &= \text{div} (\text{grad}\ \phi),\end{aligned} \quad\quad\quad(5)

and never never never, the busted way

\begin{aligned}\text{Laplacian}\  \phi &= (\boldsymbol{\nabla} \cdot \boldsymbol{\nabla}) \phi.\end{aligned} \quad\quad\quad(6)

Because “this only works in Cartesian coordinates”. He probably backed up this assertion with a heartwarming and encouraging statement like “back in the days when University of Toronto was a real school you would have learned this in kindergarten”.

This detail is actually something that has bugged me ever since, because my assumption was that, provided one was careful, a change to an alternate coordinate system should not matter. The gradient is still the gradient, so it seems to me that this ought to be a general way to calculate things.

Here we explore the validity of the dictatorial comments of Prof Dmitrevsky. The key to reconciling intuition and his statement turns out to lie with the fact that one has to let the gradient operate on the unit vectors in the non Cartesian representation as well as the partials, something that wasn’t clear as a first year student. Provided that this is done, the plain old dot product procedure yields the expected results.

This exploration will utilize a two dimensional space as a starting point, transforming from Cartesian to polar form representation. I’ll also utilize a geometric algebra representation of the polar unit vectors.

The gradient in polar form.

Lets start off with a calculation of the gradient in polar form starting with the Cartesian form. Writing \partial_x = {\partial {}}/{\partial {x}}, \partial_y = {\partial {}}/{\partial {y}}, \partial_r = {\partial {}}/{\partial {r}}, and \partial_\theta = {\partial {}}/{\partial {\theta}}, we want to map

\begin{aligned}\boldsymbol{\nabla} = \mathbf{e}_1 \partial_1 + \mathbf{e}_2 \partial_2= \begin{bmatrix}\mathbf{e}_1 & \mathbf{e}_2 \end{bmatrix}\begin{bmatrix}\partial_1 \\ \partial_2 \end{bmatrix},\end{aligned} \quad\quad\quad(7)

into the same form using \hat{\mathbf{r}}, \hat{\boldsymbol{\theta}}, \partial_r, and \partial_\theta. With i = \mathbf{e}_1 \mathbf{e}_2 we have

\begin{aligned}\begin{bmatrix}\mathbf{e}_1 \\ \mathbf{e}_2\end{bmatrix}=e^{i\theta}\begin{bmatrix}\hat{\mathbf{r}} \\ \hat{\boldsymbol{\theta}}\end{bmatrix}.\end{aligned} \quad\quad\quad(8)

Next we need to do a chain rule expansion of the partial operators to change variables. In matrix form that is

\begin{aligned}\begin{bmatrix}\frac{\partial {}}{\partial {x}} \\ \frac{\partial {}}{\partial {y}} \end{bmatrix}= \begin{bmatrix}\frac{\partial {r}}{\partial {x}} &          \frac{\partial {\theta}}{\partial {x}} \\ \frac{\partial {r}}{\partial {y}} &          \frac{\partial {\theta}}{\partial {y}} \end{bmatrix}\begin{bmatrix}\frac{\partial {}}{\partial {r}} \\ \frac{\partial {}}{\partial {\theta}} \end{bmatrix}.\end{aligned} \quad\quad\quad(9)

To calculate these partials we drop back to coordinates

\begin{aligned}x^2 + y^2 &= r^2 \\ \frac{y}{x} &= \tan\theta \\ \frac{x}{y} &= \cot\theta.\end{aligned} \quad\quad\quad(10)

From this we calculate

\begin{aligned}\frac{\partial {r}}{\partial {x}} &= \cos\theta \\ \frac{\partial {r}}{\partial {y}} &= \sin\theta \\  \frac{1}{{r\cos\theta}} &= \frac{\partial {\theta}}{\partial {y}} \frac{1}{{\cos^2\theta}} \\ \frac{1}{{r\sin\theta}} &= -\frac{\partial {\theta}}{\partial {x}} \frac{1}{{\sin^2\theta}},\end{aligned} \quad\quad\quad(13)

for

\begin{aligned}\begin{bmatrix}\frac{\partial {}}{\partial {x}} \\ \frac{\partial {}}{\partial {y}} \end{bmatrix}= \begin{bmatrix}\cos\theta & -\sin\theta/r \\ \sin\theta & \cos\theta/r\end{bmatrix}\begin{bmatrix}\frac{\partial {}}{\partial {r}} \\ \frac{\partial {}}{\partial {\theta}} \end{bmatrix}.\end{aligned} \quad\quad\quad(17)

We can now write down the gradient in polar form, prior to final simplification

\begin{aligned}\boldsymbol{\nabla} = e^{i\theta}\begin{bmatrix}\hat{\mathbf{r}} & \hat{\boldsymbol{\theta}}\end{bmatrix}\begin{bmatrix}\cos\theta & -\sin\theta/r \\ \sin\theta & \cos\theta/r\end{bmatrix}\begin{bmatrix}\frac{\partial {}}{\partial {r}} \\ \frac{\partial {}}{\partial {\theta}} \end{bmatrix}.\end{aligned} \quad\quad\quad(18)

Observe that we can factor a unit vector

\begin{aligned}\begin{bmatrix}\hat{\mathbf{r}} & \hat{\boldsymbol{\theta}}\end{bmatrix}=\hat{\mathbf{r}}\begin{bmatrix}1 & i\end{bmatrix}=\begin{bmatrix}i & 1\end{bmatrix}\hat{\boldsymbol{\theta}}\end{aligned} \quad\quad\quad(19)

so the 1,1 element of the matrix product in the interior is

\begin{aligned}\begin{bmatrix}\hat{\mathbf{r}} & \hat{\boldsymbol{\theta}}\end{bmatrix}\begin{bmatrix}\cos\theta \\ \sin\theta \end{bmatrix}=\hat{\mathbf{r}} e^{i\theta} = e^{-i\theta}\hat{\mathbf{r}}.\end{aligned} \quad\quad\quad(20)

Similarly, the 1,2 element of the matrix product in the interior is

\begin{aligned}\begin{bmatrix}\hat{\mathbf{r}} & \hat{\boldsymbol{\theta}}\end{bmatrix}\begin{bmatrix}-\sin\theta/r \\ \cos\theta/r\end{bmatrix}=\frac{1}{{r}} e^{-i\theta} \hat{\boldsymbol{\theta}}.\end{aligned} \quad\quad\quad(21)

The exponentials cancel nicely, leaving after a final multiplication with the polar form for the gradient

\begin{aligned}\boldsymbol{\nabla} = \hat{\mathbf{r}} \partial_r + \hat{\boldsymbol{\theta}} \frac{1}{{r}} \partial_\theta\end{aligned} \quad\quad\quad(22)

That was a fun way to get the result, although we could have just looked it up. We want to use this now to calculate the Laplacian.
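Before doing that, here is a quick symbolic spot check of (22) against the Cartesian gradient (a sympy check of my own, not part of the original derivation), using the test function F = r^2 \sin\theta = r y:

import sympy as sp

x, y = sp.symbols('x y', positive=True)
r = sp.sqrt(x**2 + y**2)

# Test function F = r^2 sin(theta), which is r*y in Cartesian form.
F = r * y
grad_cart = sp.Matrix([sp.diff(F, x), sp.diff(F, y)])

# Polar form (22): r_hat dF/dr + theta_hat (1/r) dF/dtheta, with
# dF/dr = 2 r sin(theta) = 2 y, and (1/r) dF/dtheta = r cos(theta) = x.
r_hat = sp.Matrix([x, y]) / r
theta_hat = sp.Matrix([-y, x]) / r
grad_polar = r_hat * (2 * y) + theta_hat * x

print(sp.simplify(grad_cart - grad_polar))  # expect the zero vector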

Polar form Laplacian for the plane.

We are now ready to look at the Laplacian. First let’s do it the first year electricity and magnetism course way. We look up the formula for polar form divergence, the one we were supposed to have memorized in kindergarten, and find it to be

\begin{aligned}\text{div}\ \mathbf{A} = \partial_r A_r + \frac{1}{{r}} A_r + \frac{1}{{r}} \partial_\theta A_\theta\end{aligned} \quad\quad\quad(23)

We can now apply this to the gradient vector in polar form which has components \boldsymbol{\nabla}_r = \partial_r, and \boldsymbol{\nabla}_\theta = (1/r)\partial_\theta, and get

\begin{aligned}\text{div}\ \text{grad} = \partial_{rr} + \frac{1}{{r}} \partial_r + \frac{1}{{r^2}} \partial_{\theta\theta}\end{aligned} \quad\quad\quad(24)

This is the expected result, and what we should get by performing \boldsymbol{\nabla} \cdot \boldsymbol{\nabla} in polar form. Now, let’s do it the wrong way, dotting our gradient with itself.

\begin{aligned}\boldsymbol{\nabla} \cdot \boldsymbol{\nabla} &= \left(\partial_r, \frac{1}{{r}} \partial_\theta\right) \cdot \left(\partial_r, \frac{1}{{r}} \partial_\theta\right) \\ &= \partial_{rr} + \frac{1}{{r}} \partial_\theta \left(\frac{1}{{r}} \partial_\theta\right) \\ &= \partial_{rr} + \frac{1}{{r^2}} \partial_{\theta\theta}\end{aligned}

This is wrong! So is Dmitrevsky right that this procedure is flawed, or do you spot the mistake? I have also cruelly written this out in a way that obscures the error and highlights the source of the confusion.

The problem is that our unit vectors are functions, and they must also be included in the application of our partials. Using the coordinate polar form without explicitly putting in the unit vectors is how we go wrong. Here’s the right way

\begin{aligned}\boldsymbol{\nabla} \cdot \boldsymbol{\nabla} &=\left( \hat{\mathbf{r}} \partial_r + \hat{\boldsymbol{\theta}} \frac{1}{{r}} \partial_\theta \right) \cdot \left( \hat{\mathbf{r}} \partial_r + \hat{\boldsymbol{\theta}} \frac{1}{{r}} \partial_\theta \right) \\ &=\hat{\mathbf{r}} \cdot \partial_r \left(\hat{\mathbf{r}} \partial_r \right)+\hat{\mathbf{r}} \cdot \partial_r \left( \hat{\boldsymbol{\theta}} \frac{1}{{r}} \partial_\theta \right)+\hat{\boldsymbol{\theta}} \cdot \frac{1}{{r}} \partial_\theta \left( \hat{\mathbf{r}} \partial_r \right)+\hat{\boldsymbol{\theta}} \cdot \frac{1}{{r}} \partial_\theta \left( \hat{\boldsymbol{\theta}} \frac{1}{{r}} \partial_\theta \right) \\ \end{aligned}

Now we need the derivatives of our unit vectors. The \partial_r derivatives are zero since these have no radial dependence, but we do have \theta partials

\begin{aligned}\partial_\theta \hat{\mathbf{r}} &=\partial_\theta \left( \mathbf{e}_1 e^{i\theta} \right) \\ &=\mathbf{e}_1 \mathbf{e}_1 \mathbf{e}_2 e^{i\theta} \\ &=\mathbf{e}_2 e^{i\theta} \\ &=\hat{\boldsymbol{\theta}},\end{aligned}

and

\begin{aligned}\partial_\theta \hat{\boldsymbol{\theta}} &=\partial_\theta \left( \mathbf{e}_2 e^{i\theta} \right) \\ &=\mathbf{e}_2 \mathbf{e}_1 \mathbf{e}_2 e^{i\theta} \\ &=-\mathbf{e}_1 e^{i\theta} \\ &=-\hat{\mathbf{r}}.\end{aligned}

(One should be able to get the same results if these unit vectors were written out in full as \hat{\mathbf{r}} = \mathbf{e}_1 \cos\theta + \mathbf{e}_2 \sin\theta, and \hat{\boldsymbol{\theta}} = \mathbf{e}_2 \cos\theta - \mathbf{e}_1 \sin\theta, instead of using the obscure geometric algebra quaternionic rotation exponential operators.)

Having calculated these partials we now have

\begin{aligned}(\boldsymbol{\nabla} \cdot \boldsymbol{\nabla}) =\partial_{rr} +\frac{1}{{r}} \partial_r +\frac{1}{{r^2}} \partial_{\theta\theta} \end{aligned} \quad\quad\quad(25)

Exactly what it should be, and what we got with the coordinate form of the divergence operator when applying the “Laplacian equals the divergence of the gradient” rule blindly. We see that the expectation that \boldsymbol{\nabla} \cdot \boldsymbol{\nabla} is the Laplacian in more than just the Cartesian coordinate system is not invalid, but that care is required to apply the chain rule to all functions. We also see that expressing a vector in coordinate form when the basis vectors are position dependent is also a path to danger.
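The same comparison can also be made mechanically (my addition, not in the original post), asking sympy to confirm that (25) agrees with the Cartesian Laplacian for a concrete test function:

import sympy as sp

x, y, r, theta = sp.symbols('x y r theta', positive=True)

# An arbitrary (non-harmonic) test function and its Cartesian Laplacian.
F_cart = x**3 * y + y**2
lap_cart = sp.diff(F_cart, x, 2) + sp.diff(F_cart, y, 2)

# The same function in polar coordinates, fed through (25).
F_polar = F_cart.subs({x: r * sp.cos(theta), y: r * sp.sin(theta)})
lap_polar = (sp.diff(F_polar, r, 2)
             + sp.diff(F_polar, r) / r
             + sp.diff(F_polar, theta, 2) / r**2)

# Transform the Cartesian result to polar variables and compare.
lap_cart_polar = lap_cart.subs({x: r * sp.cos(theta), y: r * sp.sin(theta)})
print(sp.simplify(lap_polar - lap_cart_polar))  # expect 0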

Is this anything that our electricity and magnetism prof didn’t know? Unlikely. Is this something that our prof felt that could not be explained to a mob of first year students? Probably.

References

[1] F.W. Byron and R.W. Fuller. Mathematics of Classical and Quantum Physics. Dover Publications, 1992.


Two particle center of mass Laplacian change of variables.

Posted by peeterjoot on November 30, 2009

[Click here for a PDF of this post with nicer formatting]

Exercise 15.2 in [1] is to do a center of mass change of variables for the two particle Hamiltonian

\begin{aligned}H = - \frac{\hbar^2}{2 m_1} {\boldsymbol{\nabla}_1}^2- \frac{\hbar^2}{2 m_2} {\boldsymbol{\nabla}_2}^2+ V(\mathbf{r}_1 -\mathbf{r}_2).\end{aligned} \quad\quad\quad(1)

Before trying this, I was surprised that this would result in a diagonal form for the transformed Hamiltonian, so it is well worth doing the problem to see why this is the case. With M = m_1 + m_2, he uses

\begin{aligned}\boldsymbol{\xi} &= \mathbf{r}_1 - \mathbf{r}_2 \\ \boldsymbol{\eta} &= \frac{1}{{M}}( m_1 \mathbf{r}_1 + m_2 \mathbf{r}_2 ).\end{aligned} \quad\quad\quad(2)

Let’s use coordinates {x_k}^{(1)} for \mathbf{r}_1, and {x_k}^{(2)} for \mathbf{r}_2. Expanding the first order partial operator for {\partial {}}/{\partial {{x_1}^{(1)}}} by chain rule in terms of the \boldsymbol{\eta} and \boldsymbol{\xi} coordinates we have

\begin{aligned}\frac{\partial {}}{\partial {x_1^{(1)}}}&=\frac{\partial {\eta_k}}{\partial {x_1^{(1)}}} \frac{\partial {}}{\partial {\eta_k}}+\frac{\partial {\xi_k}}{\partial {x_1^{(1)}}} \frac{\partial {}}{\partial {\xi_k}} \\ &=\frac{m_1}{M} \frac{\partial {}}{\partial {\eta_1}}+ \frac{\partial {}}{\partial {\xi_1}}.\end{aligned}

We also have

\begin{aligned}\frac{\partial {}}{\partial {x_1^{(2)}}}&=\frac{\partial {\eta_k}}{\partial {x_1^{(2)}}} \frac{\partial {}}{\partial {\eta_k}}+\frac{\partial {\xi_k}}{\partial {x_1^{(2)}}} \frac{\partial {}}{\partial {\xi_k}} \\ &=\frac{m_2}{M} \frac{\partial {}}{\partial {\eta_1}}- \frac{\partial {}}{\partial {\xi_1}}.\end{aligned}

The second partials for these x coordinates do not form a diagonal second order operator, but are instead

\begin{aligned}\frac{\partial {}}{\partial {x_1^{(1)}}} \frac{\partial {}}{\partial {x_1^{(1)}}}&=\frac{(m_1)^2}{M^2} \frac{\partial^2}{\partial \eta_1 \partial \eta_1}{}+\frac{\partial^2}{\partial \xi_1 \partial \xi_1}{}+2 \frac{m_1}{M} \frac{\partial^2}{\partial \xi_1 \partial \eta_1}{} \\ \frac{\partial {}}{\partial {x_1^{(2)}}} \frac{\partial {}}{\partial {x_1^{(2)}}}&=\frac{(m_2)^2}{M^2} \frac{\partial^2}{\partial \eta_1 \partial \eta_1}{}+\frac{\partial^2}{\partial \xi_1 \partial \xi_1}{}-2 \frac{m_2}{M} \frac{\partial^2}{\partial \xi_1 \partial \eta_1}{}.\end{aligned} \quad\quad\quad(4)

The desired result follows directly, since the mixed partial terms conveniently cancel when we sum (1/m_1) {\partial {}}/{\partial {x_1^{(1)}}} {\partial {}}/{\partial {x_1^{(1)}}} +(1/m_2) {\partial {}}/{\partial {x_1^{(2)}}} {\partial {}}/{\partial {x_1^{(2)}}}. This leaves us with

\begin{aligned}H = \frac{-\hbar^2}{2} \sum_{k=1}^3 \left( \frac{1}{{M}} \frac{\partial^2}{\partial \eta_k \partial \eta_k}{}+ \left( \frac{1}{{m_1}} + \frac{1}{{m_2}} \right) \frac{\partial^2}{\partial \xi_k \partial \xi_k}{}\right)+ V(\boldsymbol{\xi}),\end{aligned} \quad\quad\quad(6)

With the shorthand of the text

\begin{aligned}\boldsymbol{\nabla}_{\boldsymbol{\eta}}^2 &= \sum_k \frac{\partial^2}{\partial \eta_k \partial \eta_k}{} \\ \boldsymbol{\nabla}_{\boldsymbol{\xi}}^2 &= \sum_k \frac{\partial^2}{\partial \xi_k \partial \xi_k}{},\end{aligned} \quad\quad\quad(7)

this is the result to be proven.
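A one dimensional sympy check of that mixed partial cancellation (my addition) also goes through without trouble:

import sympy as sp

x1, x2, m1, m2 = sp.symbols('x1 x2 m1 m2', positive=True)
M = m1 + m2
xi = x1 - x2
eta = (m1 * x1 + m2 * x2) / M

# Arbitrary concrete test function of (xi, eta).
F = sp.exp(xi) * sp.sin(eta)
lhs = sp.diff(F, x1, 2) / m1 + sp.diff(F, x2, 2) / m2

# Diagonal form: (1/M) d^2/d eta^2 + (1/m1 + 1/m2) d^2/d xi^2, applied to
# G(a, b) = exp(a) sin(b), then evaluated at a = xi, b = eta.
a, b = sp.symbols('a b')
G = sp.exp(a) * sp.sin(b)
rhs = (sp.diff(G, b, 2) / M + (1 / m1 + 1 / m2) * sp.diff(G, a, 2)).subs({a: xi, b: eta})

print(sp.simplify(lhs - rhs))  # expect 0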

References

[1] D. Bohm. Quantum Theory. Courier Dover Publications, 1989.


A summary: Bivector form of quantum angular momentum operator

Posted by peeterjoot on September 6, 2009

[Click here for a PDF of this sequence of posts with nicer formatting]

Having covered a fairly wide range in the previous Geometric Algebra exploration of the angular momentum operator, it seems worthwhile to attempt to summarize what was learned.

The exploration started with a simple observation that the use of the spatial pseudoscalar for the imaginary of the angular momentum operator in its coordinate form

\begin{aligned}L_1 &= -i \hbar( x_2 \partial_3 - x_3 \partial_2 ) \\ L_2 &= -i \hbar( x_3 \partial_1 - x_1 \partial_3 ) \\ L_3 &= -i \hbar( x_1 \partial_2 - x_2 \partial_1 )  \end{aligned} \quad\quad\quad(69)

allowed for expressing the angular momentum operator in its entirety as a bivector valued operator

\begin{aligned}\mathbf{L} &= - \hbar \mathbf{x} \wedge \boldsymbol{\nabla} \end{aligned} \quad\quad\quad(72)

The bivector representation has an intrinsic complex behavior, eliminating the requirement for an explicitly imaginary i as is the case in the coordinate representation.

It was then assumed that the Laplacian can be expressed directly in terms of \mathbf{x} \wedge \boldsymbol{\nabla}. This isn’t an unreasonable thought since we can factor the gradient with components projected onto and perpendicular to a constant reference vector \hat{\mathbf{a}} as

\begin{aligned}\boldsymbol{\nabla} = \hat{\mathbf{a}} (\hat{\mathbf{a}} \cdot \boldsymbol{\nabla}) + \hat{\mathbf{a}} (\hat{\mathbf{a}} \wedge \boldsymbol{\nabla}) \end{aligned} \quad\quad\quad(73)

and this squares to a Laplacian expressed in terms of these constant reference directions

\begin{aligned}\boldsymbol{\nabla}^2 = (\hat{\mathbf{a}} \cdot \boldsymbol{\nabla})^2 - (\hat{\mathbf{a}} \wedge \boldsymbol{\nabla})^2 \end{aligned} \quad\quad\quad(74)

a quantity that has an angular momentum like operator with respect to a constant direction. It was then assumed that we could find an operator representation of the form

\begin{aligned}\boldsymbol{\nabla}^2 = \frac{1}{{\mathbf{x}^2}} \left( (\mathbf{x} \cdot \boldsymbol{\nabla})^2 - \left\langle{{(\mathbf{x} \wedge \boldsymbol{\nabla})^2}}\right\rangle + f(\mathbf{x}, \boldsymbol{\nabla}) \right) \end{aligned} \quad\quad\quad(75)

where f(\mathbf{x}, \boldsymbol{\nabla}) was to be determined, and was found by subtraction. Thinking ahead to relativistic applications, this result was obtained for the n-dimensional Laplacian, and was found to be

\begin{aligned}\nabla^2 &= \frac{1}{{x^2}} \left( (n-2 + x \cdot \nabla) (x \cdot \nabla) - \left\langle{{(x \wedge \nabla)^2}}\right\rangle \right) \end{aligned} \quad\quad\quad(76)

For the 3D case specifically this is

\begin{aligned}\boldsymbol{\nabla}^2 &= \frac{1}{{\mathbf{x}^2}} \left( (1 + \mathbf{x} \cdot \boldsymbol{\nabla}) (\mathbf{x} \cdot \boldsymbol{\nabla}) - \left\langle{{(\mathbf{x} \wedge \boldsymbol{\nabla})^2}}\right\rangle \right) \end{aligned} \quad\quad\quad(77)

While the scalar selection above is good for some purposes, it interferes with observations about simultaneous eigenfunctions for the angular momentum operator and the scalar part of its square as seen in the Laplacian. With some difficulty and tedium, by subtracting the bivector and quadvector grades from the squared angular momentum operator (x \wedge \nabla)^2 it was eventually found that (76) can be written as

\begin{aligned}\nabla^2 &= \frac{1}{{x^2}} \left(   (n-2 + x \cdot \nabla) (x \cdot \nabla) + (n-2 - x \wedge \nabla) (x \wedge \nabla) + (x \wedge \nabla) \wedge (x \wedge \nabla) \right) \end{aligned} \quad\quad\quad(78)

In the 3D case the quadvector vanishes and (77) with the scalar selection removed is reduced to

\begin{aligned}\boldsymbol{\nabla}^2 &= \frac{1}{{\mathbf{x}^2}} \left( (1 + \mathbf{x} \cdot \boldsymbol{\nabla}) (\mathbf{x} \cdot \boldsymbol{\nabla}) + (1 - \mathbf{x} \wedge \boldsymbol{\nabla}) (\mathbf{x} \wedge \boldsymbol{\nabla}) \right) \end{aligned} \quad\quad\quad(79)

In 3D we also have the option of using the duality relation between the cross and the wedge \mathbf{a} \wedge \mathbf{b} = i (\mathbf{a} \times \mathbf{b}) to express the Laplacian

\begin{aligned}\boldsymbol{\nabla}^2 &= \frac{1}{{\mathbf{x}^2}} \left( (1 + \mathbf{x} \cdot \boldsymbol{\nabla}) (\mathbf{x} \cdot \boldsymbol{\nabla}) + (1 - i (\mathbf{x} \times \boldsymbol{\nabla})) i(\mathbf{x} \times \boldsymbol{\nabla}) \right) \end{aligned} \quad\quad\quad(80)

Since it is customary to express angular momentum as \mathbf{L} = -i \hbar (\mathbf{x} \times \boldsymbol{\nabla}), we see here that the imaginary in this context should perhaps necessarily be viewed as the spatial pseudoscalar. It was that guess that led down this path, and we come full circle back to this considering how to factor the Laplacian in vector quantities. Curiously this factorization is in no way specific to Quantum Theory.

A few verifications of the Laplacian in (80) were made. First it was shown that the directional derivative terms containing \mathbf{x} \cdot \boldsymbol{\nabla}, are equivalent to the radial terms of the Laplacian in spherical polar coordinates. That is

\begin{aligned}\frac{1}{{\mathbf{x}^2}} (1 + \mathbf{x} \cdot \boldsymbol{\nabla}) (\mathbf{x} \cdot \boldsymbol{\nabla}) \psi &= \frac{1}{{r}} \frac{\partial^2}{\partial r^2} (r\psi)  \end{aligned} \quad\quad\quad(81)
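That verification is short enough to repeat here (my expansion, using \mathbf{x} \cdot \boldsymbol{\nabla} = r \partial_r):

\begin{aligned}\frac{1}{{r^2}} (1 + r \partial_r)( r \partial_r \psi) = \frac{1}{{r^2}} \left( r \partial_r \psi + r \partial_r \psi + r^2 \partial_{rr} \psi \right) = \frac{2}{r} \partial_r \psi + \partial_{rr} \psi = \frac{1}{{r}} \partial_{rr} (r \psi).\end{aligned}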

Employing the quaternion operator for the spherical polar rotation

\begin{aligned}R &= e^{\mathbf{e}_{31}\theta/2} e^{\mathbf{e}_{12}\phi/2} \\ \mathbf{x} &= r \tilde{R} \mathbf{e}_3 R  \end{aligned} \quad\quad\quad(82)

it was also shown that there was explicitly no radial dependence in the angular momentum operator which takes the form

\begin{aligned}\mathbf{x} \wedge \boldsymbol{\nabla} &= \tilde{R} \left( \mathbf{e}_3 \mathbf{e}_1 R \partial_\theta + \mathbf{e}_3 \mathbf{e}_2 R \frac{1}{{\sin\theta}} \partial_\phi \right) \\ &= \hat{\mathbf{r}} \left( \hat{\boldsymbol{\theta}} \partial_\theta + \hat{\boldsymbol{\phi}} \frac{1}{{\sin\theta}} \partial_\phi \right)  \end{aligned} \quad\quad\quad(84)

Because there is a \theta, and \phi dependence in the unit vectors \hat{\mathbf{r}}, \hat{\boldsymbol{\theta}}, and \hat{\boldsymbol{\phi}}, squaring the angular momentum operator in this form means that the unit vectors are also operated on. Those vectors were given by the triplet

\begin{aligned}\begin{pmatrix}\hat{\mathbf{r}} \\ \hat{\boldsymbol{\theta}} \\ \hat{\boldsymbol{\phi}} \\ \end{pmatrix}&=\tilde{R}\begin{pmatrix}\mathbf{e}_3 \\ \mathbf{e}_1 \\ \mathbf{e}_2 \\ \end{pmatrix}R \end{aligned} \quad\quad\quad(86)

Using I = \mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_3 for the spatial pseudoscalar, and i = \mathbf{e}_1 \mathbf{e}_2 (a possibly confusing switch of notation) for the bivector of the x-y plane we can write the spherical polar unit vectors in exponential form as

\begin{aligned}\begin{pmatrix}\hat{\boldsymbol{\phi}} \\ \hat{\mathbf{r}} \\ \hat{\boldsymbol{\theta}} \\ \end{pmatrix}&=\begin{pmatrix}\mathbf{e}_2 e^{i\phi} \\ \mathbf{e}_3 e^{I \hat{\boldsymbol{\phi}} \theta} \\ i \hat{\boldsymbol{\phi}} e^{I \hat{\boldsymbol{\phi}} \theta} \\ \end{pmatrix} \end{aligned} \quad\quad\quad(87)

These or related expansions were used to verify (with some difficulty) that the scalar squared bivector operator is identical to the expected scalar spherical polar coordinates parts of the Laplacian

\begin{aligned}-\left\langle{{ (\mathbf{x} \wedge \boldsymbol{\nabla})^2 }}\right\rangle =\frac{1}{{\sin\theta}} \frac{\partial {}}{\partial {\theta}} \sin\theta \frac{\partial {}}{\partial {\theta}} + \frac{1}{{\sin^2\theta}} \frac{\partial^2}{\partial \phi^2} \end{aligned} \quad\quad\quad(88)

Additionally, by left or right dividing a unit bivector from the angular momentum operator, we are able to find that the raising and lowering operators are left as one of the factors

\begin{aligned}\mathbf{x} \wedge \boldsymbol{\nabla} &= \mathbf{e}_{31} \left( e^{i\phi} (\partial_\theta + i \cot\theta \partial_\phi) + \mathbf{e}_{13} \frac{1}{{i}} \partial_\phi \right) \\ \mathbf{x} \wedge \boldsymbol{\nabla} &= \left( e^{-i\phi} (\partial_\theta - i \cot\theta \partial_\phi) - \mathbf{e}_{13} \frac{1}{{i}} \partial_\phi \right) \mathbf{e}_{31} \end{aligned} \quad\quad\quad(89)

Both of these use i = \mathbf{e}_1 \mathbf{e}_2, the bivector for the plane, and not the spatial pseudoscalar. We are then able to see that in the context of the raising and lowering operator for the radial equation the interpretation of the imaginary should be one of a plane.

Using the raising operator factorization, it was calculated that (\sin\theta)^\lambda e^{i\lambda \phi} was an eigenfunction of the bivector operator \mathbf{x} \wedge \boldsymbol{\nabla} with eigenvalue -\lambda. This results in the simultaneous eigenvalue of \lambda(\lambda + 1) for this eigenfunction with the scalar squared angular momentum operator.

There are a few things here that have not been explored to their logical conclusion.

The bivector Fourier projections I \mathbf{e}_k (\mathbf{x} \wedge \boldsymbol{\nabla} ) \cdot (-I \mathbf{e}_k) do not obey the commutation relations of the scalar angular momentum components, so an attempt to directly use these to construct raising and lowering operators does not produce anything useful. The raising and lowering operators in a form that could be used to find eigensolutions were found by factoring out \mathbf{e}_{13} from the bivector operator. Making this particular factorization was a fluke, tried only because it was desirable to express the bivector operator entirely in spherical polar form. It is curious that this results in raising and lowering operators for the x,y plane, and understanding this further would be nice.

In the eigen solutions for the bivector operator, no quantization condition was imposed. I don’t understand the argument that Bohm used to do so in the traditional treatment, and revisiting this once that is done is in order.

I am also unsure exactly how Bohm knows that the inner product for the eigenfunctions should be a surface integral. This choice works, but what drives it? Can that be related to anything here?


Bivector grades of the squared angular momentum operator.

Posted by peeterjoot on September 6, 2009

[Click here for a PDF of this sequence of posts with nicer formatting]

Motivation

The aim here is to extract the bivector grades of the squared angular momentum operator

\begin{aligned}{\left\langle{{ (x \wedge \nabla)^2 }}\right\rangle}_{2} \stackrel{?}{=} \cdots \end{aligned} \quad\quad\quad(1)

I’d tried this before and believe I got it wrong. Take it super slow and dumb and careful.

Non-operator expansion.

Suppose P is a bivector, P = (\gamma^k \wedge \gamma^m) P_{km}. The grade two part of its product with another basis bivector is

\begin{aligned}{\left\langle{{ (\gamma_a \wedge \gamma_b) (\gamma^k \wedge \gamma^m) }}\right\rangle}_{2} P_{km} &= {\left\langle{{ (\gamma_a \gamma_b - \gamma_a \cdot \gamma_b) (\gamma^k \wedge \gamma^m) }}\right\rangle}_{2} P_{km} \\ &= {\left\langle{{ \gamma_a (\gamma_b \cdot (\gamma^k \wedge \gamma^m)) }}\right\rangle}_{2} P_{km} + {\left\langle{{ \gamma_a (\gamma_b \wedge (\gamma^k \wedge \gamma^m)) }}\right\rangle}_{2} P_{km} - (\gamma_a \cdot \gamma_b) (\gamma^k \wedge \gamma^m) P_{km} \\ &= (\gamma_a \wedge \gamma^m) P_{b m} -(\gamma_a \wedge \gamma^k) P_{k b} - (\gamma_a \cdot \gamma_b) (\gamma^k \wedge \gamma^m) P_{km} \\ &+ (\gamma_a \cdot \gamma_b) (\gamma^k \wedge \gamma^m) P_{km} - (\gamma_b \wedge \gamma^m) P_{a m} + (\gamma_b \wedge \gamma^k) P_{k a} \\ &= (\gamma_a \wedge \gamma^c) (P_{b c} -P_{c b})+ (\gamma_b \wedge \gamma^c) (P_{c a} -P_{a c} ) \end{aligned}

This same procedure will be used for the operator square, but we have the complexity of having the second angular momentum operator change the first bivector result.

Operator expansion.

In the first few lines of the bivector product expansion above, a blind replacement \gamma_a \rightarrow x, and \gamma_b \rightarrow \nabla gives us

\begin{aligned}{\left\langle{{ (x \wedge \nabla) (\gamma^k \wedge \gamma^m) }}\right\rangle}_{2} P_{km} &= {\left\langle{{ (x \nabla - x \cdot \nabla) (\gamma^k \wedge \gamma^m) }}\right\rangle}_{2} P_{km} \\ &= {\left\langle{{ x (\nabla \cdot (\gamma^k \wedge \gamma^m)) }}\right\rangle}_{2} P_{km} + {\left\langle{{ x (\nabla \wedge (\gamma^k \wedge \gamma^m)) }}\right\rangle}_{2} P_{km} - (x \cdot \nabla) (\gamma^k \wedge \gamma^m) P_{km} \end{aligned}

Using P_{km} = x_k \partial_m, eliminating the coordinate expansion we have an intermediate result that gets us partway to the desired result

\begin{aligned}{\left\langle{{ (x \wedge \nabla)^2 }}\right\rangle}_{2}&={\left\langle{{ x (\nabla \cdot (x \wedge \nabla)) }}\right\rangle}_{2} + {\left\langle{{ x (\nabla \wedge (x \wedge \nabla)) }}\right\rangle}_{2} - (x \cdot \nabla) (x \wedge \nabla)  \end{aligned} \quad\quad\quad(2)

An expansion of the first term should be easier than the second. Dropping back to coordinates we have

\begin{aligned}{\left\langle{{ x (\nabla \cdot (x \wedge \nabla)) }}\right\rangle}_{2} &={\left\langle{{ x (\nabla \cdot (\gamma^k \wedge \gamma^m)) }}\right\rangle}_{2} x_k \partial_m \\ &={\left\langle{{ x (\gamma_a \partial^a \cdot (\gamma^k \wedge \gamma^m)) }}\right\rangle}_{2} x_k \partial_m \\ &={\left\langle{{ x \gamma^m \partial^k }}\right\rangle}_{2} x_k \partial_m -{\left\langle{{ x \gamma^k \partial^m }}\right\rangle}_{2} x_k \partial_m  \\ &=x \wedge (\partial^k x_k \gamma^m \partial_m )- x \wedge (\partial^m \gamma^k x_k \partial_m ) \end{aligned}

Okay, a bit closer. Backpedaling with the reinsertion of the complete vector quantities we have

\begin{aligned}{\left\langle{{ x (\nabla \cdot (x \wedge \nabla)) }}\right\rangle}_{2} &= x \wedge (\partial^k x_k \nabla ) - x \wedge (\partial^m x \partial_m )  \end{aligned} \quad\quad\quad(3)

Expanding out these two will be conceptually easier if the functional operation is made explicit. For the first

\begin{aligned}x \wedge (\partial^k x_k \nabla ) \phi&=x \wedge x_k \partial^k (\nabla \phi)+x \wedge ((\partial^k x_k) \nabla) \phi \\ &=x \wedge ((x \cdot \nabla) (\nabla \phi))+ n (x \wedge \nabla) \phi \end{aligned}

In operator form this is

\begin{aligned}x \wedge (\partial^k x_k \nabla ) &= n (x \wedge \nabla) + x \wedge ((x \cdot \nabla) \nabla )  \end{aligned} \quad\quad\quad(4)

Now consider the second half of (3). For that we expand

\begin{aligned}x \wedge (\partial^m x \partial_m ) \phi&=x \wedge (x \partial_m \partial^m \phi)+ x \wedge ((\partial^m x) \partial_m \phi) \end{aligned}

Since x \wedge x = 0, and \partial^m x = \partial^m x_k \gamma^k = \gamma^m, we have

\begin{aligned}x \wedge (\partial^m x \partial_m ) \phi&=x \wedge (\gamma^m \partial_m ) \phi \\ &=(x \wedge \nabla) \phi \end{aligned}

Putting things back together we have for (3)

\begin{aligned}{\left\langle{{ x (\nabla \cdot (x \wedge \nabla)) }}\right\rangle}_{2} &= (n-1) (x \wedge \nabla) + x \wedge ((x \cdot \nabla) \nabla )  \end{aligned} \quad\quad\quad(5)

This now completes a fair amount of the bivector selection, and a substitution back into (2) yields

\begin{aligned}{\left\langle{{ (x \wedge \nabla)^2 }}\right\rangle}_{2}&=(n-1 - x \cdot \nabla) (x \wedge \nabla) + x \wedge ((x \cdot \nabla) \nabla ) + x \cdot (\nabla \wedge (x \wedge \nabla))  \end{aligned} \quad\quad\quad(6)

The remaining task is to explicitly expand the last vector-trivector dot product. To do that we use the basic alternation expansion identity

\begin{aligned}a \cdot (b \wedge c \wedge d)&= (a \cdot b) (c \wedge d)-(a \cdot c) (b \wedge d)+(a \cdot d) (b \wedge c) \end{aligned} \quad\quad\quad(7)

To see how to apply this to the operator case, let’s write it out explicitly, temporarily in coordinates

\begin{aligned}x \cdot (\nabla \wedge (x \wedge \nabla)) \phi&=(x^\mu \gamma_\mu) \cdot ((\gamma^\nu \partial_\nu ) \wedge (x_\alpha \gamma^\alpha \wedge (\gamma^\beta \partial_\beta))) \phi \\ &=x \cdot \nabla (x \wedge \nabla) \phi-x \cdot \gamma^\alpha \nabla \wedge x_\alpha \nabla \phi+x^\mu \nabla \wedge x \gamma_\mu \cdot \gamma^\beta \partial_\beta  \phi \\ &=x \cdot \nabla (x \wedge \nabla) \phi-x^\alpha \nabla \wedge x_\alpha \nabla \phi+x^\mu \nabla \wedge x \partial_\mu  \phi \end{aligned}

Considering this term by term starting with the second one we have

\begin{aligned}x^\alpha \nabla \wedge x_\alpha \nabla \phi&=x_\alpha (\gamma^\mu \partial_\mu) \wedge x^\alpha \nabla \phi \\ &=x_\alpha \gamma^\mu \wedge (\partial_\mu x^\alpha) \nabla \phi +x_\alpha \gamma^\mu \wedge x^\alpha \partial_\mu \nabla \phi  \\ &=x_\mu \gamma^\mu \wedge \nabla \phi +x_\alpha x^\alpha \gamma^\mu \wedge \partial_\mu \nabla \phi  \\ &=x \wedge \nabla \phi +x^2 \nabla \wedge \nabla \phi \end{aligned}

The curl of a gradient is zero, since summing over a product of antisymmetric and symmetric indexes \gamma^\mu \wedge \gamma^\nu \partial_{\mu\nu} is zero. Only one term remains to evaluate in the vector-trivector dot product now

\begin{aligned}x \cdot (\nabla \wedge x \wedge \nabla) &=(-1 + x \cdot \nabla )(x \wedge \nabla) +x^\mu \nabla \wedge x \partial_\mu   \end{aligned} \quad\quad\quad(8)

Again, a completely dumb and brute force expansion of this is

\begin{aligned}x^\mu \nabla \wedge x \partial_\mu \phi&=x^\mu (\gamma^\nu \partial_\nu) \wedge (x^\alpha \gamma_\alpha) \partial_\mu \phi \\ &=x^\mu \gamma^\nu \wedge (\partial_\nu (x^\alpha \gamma_\alpha)) \partial_\mu \phi +x^\mu \gamma^\nu \wedge (x^\alpha \gamma_\alpha) \partial_\nu \partial_\mu \phi \\ &=x^\mu (\gamma^\alpha \wedge \gamma_\alpha) \partial_\mu \phi +x^\mu \gamma^\nu \wedge x \partial_\nu \partial_\mu \phi \end{aligned}

With \gamma^\mu = \pm \gamma_\mu, the wedge in the first term is zero, leaving

\begin{aligned}x^\mu \nabla \wedge x \partial_\mu \phi&=-x^\mu x \wedge \gamma^\nu \partial_\nu \partial_\mu \phi \\ &=-x^\mu x \wedge \gamma^\nu \partial_\mu \partial_\nu \phi \\ &=-x \wedge x^\mu \partial_\mu \gamma^\nu \partial_\nu \phi \end{aligned}

In vector form we have finally

\begin{aligned}x^\mu \nabla \wedge x \partial_\mu \phi &= -x \wedge (x \cdot \nabla) \nabla \phi  \end{aligned} \quad\quad\quad(9)

The final expansion of the vector-trivector dot product is now

\begin{aligned}x \cdot (\nabla \wedge x \wedge \nabla) &=(-1 + x \cdot \nabla )(x \wedge \nabla) -x \wedge (x \cdot \nabla) \nabla  \end{aligned} \quad\quad\quad(10)

This was the last piece we needed for the bivector grade selection. Incorporating this into (6), both the x \cdot \nabla (x \wedge \nabla) and the x \wedge (x \cdot \nabla) \nabla terms cancel, leaving the surprisingly simple result

\begin{aligned}{\left\langle{{ (x \wedge \nabla)^2 }}\right\rangle}_{2}&=(n-2) (x \wedge \nabla)  \end{aligned} \quad\quad\quad(11)

The power of this result is that it allows us to write the scalar angular momentum operator from the Laplacian as

\begin{aligned}\left\langle{{ (x \wedge \nabla)^2 }}\right\rangle &= (x \wedge \nabla)^2 - {\left\langle{{ (x \wedge \nabla)^2 }}\right\rangle}_{2} - (x \wedge \nabla) \wedge (x \wedge \nabla) \\ &= (x \wedge \nabla)^2 - (n-2) (x \wedge \nabla) - (x \wedge \nabla) \wedge (x \wedge \nabla) \\ &= (-(n-2) + (x \wedge \nabla) - (x \wedge \nabla) \wedge ) (x \wedge \nabla)  \end{aligned}

The complete Laplacian is

\begin{aligned}\nabla^2 &= \frac{1}{{x^2}} (x \cdot \nabla)^2 + (n - 2) \frac{1}{{x}} \cdot \nabla - \frac{1}{{x^2}} \left((x \wedge \nabla)^2 - (n-2) (x \wedge \nabla) - (x \wedge \nabla) \wedge (x \wedge \nabla) \right) \end{aligned} \quad\quad\quad(12)

In particular in less than four dimensions the quad-vector term is necessarily zero. The 3D Laplacian becomes

\begin{aligned}\boldsymbol{\nabla}^2 &= \frac{1}{{\mathbf{x}^2}} (1 + \mathbf{x} \cdot \boldsymbol{\nabla})(\mathbf{x} \cdot \boldsymbol{\nabla})+ \frac{1}{{\mathbf{x}^2}} (1 - \mathbf{x} \wedge \boldsymbol{\nabla}) (\mathbf{x} \wedge \boldsymbol{\nabla})  \end{aligned} \quad\quad\quad(13)

So any eigenfunction of the bivector angular momentum operator \mathbf{x} \wedge \boldsymbol{\nabla} is necessarily a simultaneous eigenfunction of the scalar operator.


Angular momentum polar form, factoring out the raising and lowering operators, and simultaneous eigenvalues.

Posted by peeterjoot on August 30, 2009

Continuation of ‘Bivector form of quantum angular momentum operator’ notes.

[Click here for a PDF of this sequence of posts with nicer formatting]

After a bit more manipulation we find that the angular momentum operator polar form representation, again using i = \mathbf{e}_1 \mathbf{e}_2, is

\begin{aligned}\mathbf{x} \wedge \boldsymbol{\nabla} = I \hat{\boldsymbol{\phi}} ( \partial_\theta + i \cot\theta \partial_\phi + \mathbf{e}_{23} e^{i\phi} \partial_\phi ) \end{aligned} \quad\quad\quad(54)

Observe how similar the exponential free terms within the braces are to the raising operator as given in Bohm’s equation (14.40)

\begin{aligned}L_x + i L_y &= e^{i\phi} (\partial_\theta + i \cot\theta \partial_\phi ) \\ L_z &= \frac{1}{{i}} \partial_\phi  \end{aligned} \quad\quad\quad(55)

In fact since \mathbf{e}_{23}e^{i\phi} = e^{-i\phi} \mathbf{e}_{23}, the match can be made even closer

\begin{aligned}\mathbf{x} \wedge \boldsymbol{\nabla} = I \hat{\boldsymbol{\phi}} e^{-i\phi} ( \underbrace{e^{i\phi} (\partial_\theta + i \cot\theta \partial_\phi)}_{= L_x + i L_y} + \mathbf{e}_{13} \underbrace{\frac{1}{{i}} \partial_\phi}_{=L_z} ) \end{aligned} \quad\quad\quad(57)

This is a surprising factorization, but noting that \hat{\boldsymbol{\phi}} = \mathbf{e}_2 e^{i\phi} we have

\begin{aligned}\mathbf{x} \wedge \boldsymbol{\nabla} = \mathbf{e}_{31} ( e^{i\phi} (\partial_\theta + i \cot\theta \partial_\phi) + \mathbf{e}_{13} \frac{1}{{i}} \partial_\phi ) \end{aligned} \quad\quad\quad(58)

It appears that factoring a unit bivector (in this case \mathbf{e}_{31}) out of the bivector angular momentum operator from the left leaves the raising operator as one of the remainder terms.

Similarly, noting that \mathbf{e}_{13} anticommutes with i = \mathbf{e}_{12}, we have the right factorization

\begin{aligned}\mathbf{x} \wedge \boldsymbol{\nabla} = ( e^{-i\phi} (\partial_\theta - i \cot\theta \partial_\phi) - \mathbf{e}_{13} \frac{1}{{i}} \partial_\phi )\mathbf{e}_{31} \end{aligned} \quad\quad\quad(59)

Now in the remainder, we see the polar form representation of the lowering operator L_x - i L_y = e^{-i\phi}(\partial_\theta - i\cot\theta \partial_\phi).

I wasn’t expecting the raising and lowering operators “to fall out” as they did by simply expressing the complete bivector operator in polar form. This is actually fortuitous since it shows why this peculiar combination is of interest.

If we find a zero solution to the raising or lowering operator, that is also a solution of the eigenproblem (\partial_\phi - \lambda) \psi = 0, then this is necessarily also an eigensolution of \mathbf{x} \wedge \boldsymbol{\nabla}. A secondary implication is that this is then also an eigensolution of \left\langle{{(\mathbf{x} \wedge \boldsymbol{\nabla})^2}}\right\rangle \psi = \lambda' \psi. This was the starting point in Bohm’s quest for the spherical harmonics, but why he started there wasn’t clear to me.

Saying this without the words, let’s look for eigenfunctions for the non-raising portion of (58). That is

\begin{aligned}\mathbf{e}_{31} \mathbf{e}_{13} \frac{1}{{i}} \partial_\phi f = \lambda f \end{aligned} \quad\quad\quad(60)

Since \mathbf{e}_{31} \mathbf{e}_{13} = 1 we want solutions of

\begin{aligned}\partial_\phi f = i \lambda f \end{aligned} \quad\quad\quad(61)

Solutions are

\begin{aligned}f = \kappa(\theta) e^{i\lambda \phi} \end{aligned} \quad\quad\quad(62)

A demand that this also be a zero eigenfunction for the raising operator means we are looking for solutions of

\begin{aligned}\mathbf{e}_{31} e^{i\phi} (\partial_\theta + i \cot\theta \partial_\phi) \kappa(\theta) e^{i\lambda \phi} = 0 \end{aligned} \quad\quad\quad(63)

It is sufficient to find zero eigenfunctions of

\begin{aligned}(\partial_\theta + i \cot\theta \partial_\phi) \kappa(\theta) e^{i\lambda \phi} = 0 \end{aligned} \quad\quad\quad(64)

Evaluation of the \phi partials and rearrangement leaves us with an equation in \theta only

\begin{aligned}\frac{\partial \kappa }{\partial \theta} = \lambda \cot\theta \kappa \end{aligned} \quad\quad\quad(65)

This has solutions \kappa = A(\phi) (\sin\theta)^\lambda, where because of the partial derivatives in (65) we are free to make the integration constant a function of \phi. Since this is the functional dependence that is a zero of the raising operator, including it as the \theta dependence of (62) means that we have a simultaneous zero of the raising operator, and an eigenfunction of eigenvalue \lambda for the remainder of the angular momentum operator.

\begin{aligned}f(\theta,\phi) = (\sin\theta)^\lambda e^{i\lambda \phi} \end{aligned} \quad\quad\quad(66)
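A sympy check (mine, not Bohm’s) confirms both the raising operator zero and the \partial_\phi eigenvalue for this function:

import sympy as sp

theta, phi, lam = sp.symbols('theta phi lam', positive=True)
f = sp.sin(theta)**lam * sp.exp(sp.I * lam * phi)

# Raising operator portion: (d/d theta + i cot(theta) d/d phi) f = 0.
print(sp.simplify(sp.diff(f, theta) + sp.I * sp.cot(theta) * sp.diff(f, phi)))  # expect 0

# Remaining portion: (1/i) d/d phi has eigenvalue lambda.
print(sp.simplify(sp.diff(f, phi) / sp.I - lam * f))  # expect 0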

This seems very similar to the process of adding homogeneous solutions to specific ones, since we augment the specific eigenvalued solutions for one part of the operator with ones that produce zeros for the rest.

As a check let’s apply the angular momentum operator to this as a test and see if the results match our expectations.

\begin{aligned}(\mathbf{x} \wedge \boldsymbol{\nabla} ) (\sin\theta)^\lambda e^{i\lambda \phi}&=\hat{\mathbf{r}} \left( \hat{\boldsymbol{\theta}} \partial_\theta + \hat{\boldsymbol{\phi}} \frac{1}{{\sin\theta}} \partial_\phi \right)  (\sin\theta)^\lambda e^{i\lambda \phi} \\ &=\hat{\mathbf{r}} \left( \hat{\boldsymbol{\theta}} \lambda (\sin\theta)^{\lambda-1} \cos\theta + \hat{\boldsymbol{\phi}} \frac{1}{{\sin\theta}} (\sin\theta)^\lambda (i\lambda)\right) e^{i\lambda \phi} \\ &=\lambda \hat{\mathbf{r}} \left( \hat{\boldsymbol{\theta}} \cos\theta + \hat{\boldsymbol{\phi}} i \right) e^{i\lambda \phi} (\sin\theta)^{\lambda-1}  \\  \end{aligned}

From (38) we have

\begin{aligned}\hat{\mathbf{r}} \hat{\boldsymbol{\phi}} i &= \mathbf{e}_3 \hat{\boldsymbol{\phi}} i \cos\theta - \sin\theta \\ &= \mathbf{e}_{32} i e^{i\phi} \cos\theta - \sin\theta \\ &= \mathbf{e}_{13} e^{i\phi} \cos\theta - \sin\theta \\  \end{aligned}

and from (37) we have

\begin{aligned}\hat{\mathbf{r}} \hat{\boldsymbol{\theta}}&= I \hat{\boldsymbol{\phi}}  \\ &= \mathbf{e}_{31} e^{i\phi} \end{aligned}

Putting these together shows that (\sin\theta)^\lambda e^{i\lambda \phi} is an eigenfunction of \mathbf{x} \wedge \boldsymbol{\nabla},

\begin{aligned}(\mathbf{x} \wedge \boldsymbol{\nabla} ) (\sin\theta)^\lambda e^{i\lambda \phi} = -\lambda (\sin\theta)^\lambda e^{i\lambda \phi} \end{aligned} \quad\quad\quad(67)

This negation surprised me at first, but I don’t see any errors here in the arithmetic. Observe that if this is correct, then it provides a demonstration that the previously suspected calculation leading to (7) is in fact wrong as guessed. That suspected incorrect result, a product of very messy calculation, was

\begin{aligned}\left\langle{{(\mathbf{x} \wedge \boldsymbol{\nabla})^2}}\right\rangle \stackrel{?}{=} \left( \mathbf{x} \wedge \boldsymbol{\nabla} - \frac{1}{{2}} \right) (\mathbf{x} \wedge \boldsymbol{\nabla})  \end{aligned} \quad\quad\quad(68)

the one half factor seemed unaesthetic, with the following somehow preferable

\begin{aligned}\left\langle{{(\mathbf{x} \wedge \boldsymbol{\nabla})^2}}\right\rangle \stackrel{?}{=} \left( \mathbf{x} \wedge \boldsymbol{\nabla} - 1 \right) (\mathbf{x} \wedge \boldsymbol{\nabla})  \end{aligned} \quad\quad\quad(69)

If (67) is the correct version, then calculating the effect of the operator \left\langle{{(\mathbf{x} \wedge \boldsymbol{\nabla})^2}}\right\rangle on this eigenfunction, using the form (69), we have

\begin{aligned}\left\langle{{(\mathbf{x} \wedge \boldsymbol{\nabla})^2}}\right\rangle (\sin\theta)^\lambda e^{i\lambda \phi}&=\left( \mathbf{x} \wedge \boldsymbol{\nabla} - 1 \right) (\mathbf{x} \wedge \boldsymbol{\nabla}) (\sin\theta)^\lambda e^{i\lambda \phi} \\ &=((-\lambda)^2 - (-\lambda)) (\sin\theta)^\lambda e^{i\lambda \phi} \\  \end{aligned}

So the eigenvalue is \lambda(\lambda + 1). This we do know to be the case in fact, so a second look at the messy algebra leading to (68) is justified (or an attempt at a coordinate free expansion).
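As one more consistency check (my addition), sympy confirms the \lambda(\lambda+1) eigenvalue by applying the familiar spherical polar form of the scalar part of the squared angular momentum operator directly:

import sympy as sp

theta, phi, lam = sp.symbols('theta phi lam', positive=True)
f = sp.sin(theta)**lam * sp.exp(sp.I * lam * phi)

# Scalar operator: -(1/sin) d/d theta (sin d f/d theta) - (1/sin^2) d^2 f/d phi^2,
# which should have eigenvalue lambda (lambda + 1) on f.
Lsq = -(sp.diff(sp.sin(theta) * sp.diff(f, theta), theta) / sp.sin(theta)
        + sp.diff(f, phi, 2) / sp.sin(theta)**2)

print(sp.simplify(Lsq - lam * (lam + 1) * f))  # expect 0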
