Peeter Joot's (OLD) Blog.

Math, physics, perl, and programming obscurity.

Posts Tagged ‘change of variables’

An updated compilation of notes, for ‘PHY452H1S Basic Statistical Mechanics’, Taught by Prof. Arun Paramekanti

Posted by peeterjoot on March 3, 2013

In A compilation of notes, so far, for ‘PHY452H1S Basic Statistical Mechanics’ I posted a link to this compilation of statistical mechanics course notes.

That compilation now includes all of the following too (no further updates will be made to any of these):

February 28, 2013 Rotation of diatomic molecules

February 28, 2013 Helmholtz free energy

February 26, 2013 Statistical and thermodynamic connection

February 24, 2013 Ideal gas

February 16, 2013 One dimensional well problem from Pathria chapter II

February 15, 2013 1D pendulum problem in phase space

February 14, 2013 Continuing review of thermodynamics

February 13, 2013 Lightning review of thermodynamics

February 11, 2013 Cartesian to spherical change of variables in 3d phase space

February 10, 2013 n SHO particle phase space volume

February 10, 2013 Change of variables in 2d phase space

February 10, 2013 Some problems from Kittel chapter 3

February 07, 2013 Midterm review, thermodynamics

February 06, 2013 Limit of unfair coin distribution, the hard way

February 05, 2013 Ideal gas and SHO phase space volume calculations

February 03, 2013 One dimensional random walk

February 02, 2013 1D SHO phase space

February 02, 2013 Application of the central limit theorem to a product of random vars

January 31, 2013 Liouville’s theorem questions on density and current

January 30, 2013 State counting

Posted in Math and Physics Learning. | 1 Comment »

Cartesian to spherical change of variables in 3d phase space

Posted by peeterjoot on February 11, 2013

[Click here for a PDF of this post with nicer formatting]

Question: Cartesian to spherical change of variables in 3d phase space

[1] problem 2.2 (a). Try a spherical change of variables to verify explicitly that phase space volume is preserved.

Answer

Our kinetic Lagrangian in spherical coordinates is

\begin{aligned}\mathcal{L} &= \frac{1}{{2}} m \left(  \dot{r} \hat{\mathbf{r}} + r \sin\theta \dot{\phi} \hat{\boldsymbol{\phi}} + r \dot{\theta} \hat{\boldsymbol{\theta}} \right)^2 \\ &= \frac{1}{{2}} m \left(  \dot{r}^2 + r^2 \sin^2\theta \dot{\phi}^2 + r^2 \dot{\theta}^2  \right)\end{aligned} \hspace{\stretch{1}}(1.0.1)

We read off our canonical momentum

\begin{aligned}p_r &= \frac{\partial {\mathcal{L}}}{\partial {\dot{r}}} \\ &= m \dot{r}\end{aligned} \hspace{\stretch{1}}(1.0.2a)

\begin{aligned}p_\theta &= \frac{\partial {\mathcal{L}}}{\partial {\dot{\theta}}} \\ &= m r^2 \dot{\theta}\end{aligned} \hspace{\stretch{1}}(1.0.2b)

\begin{aligned}p_\phi &= \frac{\partial {\mathcal{L}}}{\partial {\dot{\phi}}} \\ &= m r^2 \sin^2\theta \dot{\phi},\end{aligned} \hspace{\stretch{1}}(1.0.2c)

and can now express the Hamiltonian in spherical coordinates

\begin{aligned}H &= \frac{1}{{2}} m \left(\left( \frac{p_r}{m} \right)^2+ r^2 \sin^2\theta \left( \frac{p_\phi}{m r^2 \sin^2\theta} \right)^2+ r^2 \left( \frac{p_\theta}{m r^2} \right)^2\right) \\ &= \frac{p_r^2}{2m} + \frac{p_\phi^2}{2 m r^2 \sin^2\theta} + \frac{p_\theta^2}{2 m r^2}\end{aligned} \hspace{\stretch{1}}(1.0.3)
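These canonical momenta and the Hamiltonian are easy to verify mechanically. Here is a small sympy sketch (my own check, not the Mathematica one mentioned below) that recomputes 1.0.2 and 1.0.3 from the Lagrangian:

```python
import sympy as sp

t = sp.symbols('t')
m = sp.symbols('m', positive=True)
r, th, ph = (sp.Function(n)(t) for n in ('r', 'theta', 'phi'))

# kinetic Lagrangian in spherical coordinates (eq. 1.0.1)
L = sp.Rational(1, 2) * m * (r.diff(t)**2
                             + r**2 * sp.sin(th)**2 * ph.diff(t)**2
                             + r**2 * th.diff(t)**2)

# canonical momenta: p_q = dL/d(qdot)
p_r = sp.diff(L, r.diff(t))
p_th = sp.diff(L, th.diff(t))
p_ph = sp.diff(L, ph.diff(t))

assert sp.simplify(p_r - m * r.diff(t)) == 0
assert sp.simplify(p_th - m * r**2 * th.diff(t)) == 0
assert sp.simplify(p_ph - m * r**2 * sp.sin(th)**2 * ph.diff(t)) == 0

# Hamiltonian via the Legendre transform; compare to eq. 1.0.3
H = sp.simplify(p_r * r.diff(t) + p_th * th.diff(t) + p_ph * ph.diff(t) - L)
expected = p_r**2/(2*m) + p_ph**2/(2*m*r**2*sp.sin(th)**2) + p_th**2/(2*m*r**2)
assert sp.simplify(H - expected) == 0
```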

Now we want to do a change of variables. The coordinates transform as

\begin{aligned}x = r \sin\theta \cos\phi\end{aligned} \hspace{\stretch{1}}(1.0.4a)

\begin{aligned}y = r \sin\theta \sin\phi\end{aligned} \hspace{\stretch{1}}(1.0.4b)

\begin{aligned}z = r \cos\theta,\end{aligned} \hspace{\stretch{1}}(1.0.4c)

or

\begin{aligned}r = \sqrt{x^2 + y^2 + z^2}\end{aligned} \hspace{\stretch{1}}(1.0.5a)

\begin{aligned}\theta = \arccos(z/r)\end{aligned} \hspace{\stretch{1}}(1.0.5b)

\begin{aligned}\phi = \arctan(y/x).\end{aligned} \hspace{\stretch{1}}(1.0.5c)

It’s not too hard to calculate the change of variables for the momenta (verified in sphericalPhaseSpaceChangeOfVars.nb). We have

\begin{aligned}p_r = \frac{x p_x + y p_y + z p_z}{\sqrt{x^2 + y^2 + z^2}}\end{aligned} \hspace{\stretch{1}}(1.0.6a)

\begin{aligned}p_\theta = \frac{(p_x x + p_y y) z - p_z (x^2 + y^2)}{\sqrt{x^2 + y^2}}\end{aligned} \hspace{\stretch{1}}(1.0.6b)

\begin{aligned}p_\phi = x p_y - y p_x\end{aligned} \hspace{\stretch{1}}(1.0.6c)

Now let’s compute the volume element in spherical coordinates. This is

\begin{aligned}d\omega &= dr d\theta d\phi dp_r dp_\theta dp_\phi \\ &= \frac{\partial(r, \theta, \phi, p_r, p_\theta, p_\phi)}{\partial(x, y, z, p_x, p_y, p_z)}dx dy dz dp_x dp_y dp_z \\ &= \begin{vmatrix} \frac{x}{\sqrt{x^2+y^2+z^2}} & \frac{y}{\sqrt{x^2+y^2+z^2}} & \frac{z}{\sqrt{x^2+y^2+z^2}} & 0 & 0 & 0 \\  \frac{x z}{\sqrt{x^2+y^2} \left(x^2+y^2+z^2\right)} & \frac{y z}{\sqrt{x^2+y^2} \left(x^2+y^2+z^2\right)} & -\frac{\sqrt{x^2+y^2}}{x^2+y^2+z^2} & 0 & 0 & 0 \\  -\frac{y}{x^2+y^2} & \frac{x}{x^2+y^2} & 0 & 0 & 0 & 0 \\  \frac{\left(y^2+z^2\right) p_x-x y p_y-x z p_z}{\left(x^2+y^2+z^2\right)^{3/2}} & \frac{\left(x^2+z^2\right) p_y-y \left(x p_x+z p_z\right)}{\left(x^2+y^2+z^2\right)^{3/2}} & \frac{\left(x^2+y^2\right) p_z-z \left(x p_x+y p_y\right)}{\left(x^2+y^2+z^2\right)^{3/2}} & \frac{x}{\sqrt{x^2+y^2+z^2}} & \frac{y}{\sqrt{x^2+y^2+z^2}} & \frac{z}{\sqrt{x^2+y^2+z^2}} \\  \frac{y z \left(y p_x-x p_y\right)-x \left(x^2+y^2\right) p_z}{\left(x^2+y^2\right)^{3/2}} & \frac{x z \left(x p_y-y p_x\right)-y \left(x^2+y^2\right) p_z}{\left(x^2+y^2\right)^{3/2}} & \frac{x p_x+y p_y}{\sqrt{x^2+y^2}} & \frac{x z}{\sqrt{x^2+y^2}} & \frac{y z}{\sqrt{x^2+y^2}} & -\sqrt{x^2+y^2} \\  p_y & -p_x & 0 & -y & x & 0 \\ \end{vmatrix}dx dy dz dp_x dp_y dp_z \\ &= dx dy dz dp_x dp_y dp_z\end{aligned} \hspace{\stretch{1}}(1.0.7)

This also has a unit determinant, as we found in the similar cylindrical change of phase space variables.
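Rather than grinding through that $6 \times 6$ determinant by hand, it is also easy to spot check numerically. A sympy sketch, evaluating the determinant at an arbitrary phase space point (the point is hypothetical, picked only for the check):

```python
import sympy as sp

x, y, z, px, py, pz = sp.symbols('x y z p_x p_y p_z', real=True)

rho = sp.sqrt(x**2 + y**2 + z**2)
s = sp.sqrt(x**2 + y**2)

# spherical coordinates and momenta, eqs. (1.0.5) and (1.0.6), in Cartesian form
new_vars = sp.Matrix([
    rho,                                       # r
    sp.acos(z / rho),                          # theta
    sp.atan2(y, x),                            # phi (atan2 for a well defined branch)
    (x*px + y*py + z*pz) / rho,                # p_r
    ((px*x + py*y)*z - pz*(x**2 + y**2)) / s,  # p_theta
    x*py - y*px,                               # p_phi
])
J = new_vars.jacobian(sp.Matrix([x, y, z, px, py, pz]))

# evaluate the determinant at an arbitrary (hypothetical) phase space point
point = {x: 0.3, y: -1.2, z: 0.7, px: 0.5, py: 1.1, pz: -0.4}
det_val = float(J.subs(point).det())
assert abs(det_val - 1.0) < 1e-9
```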

References

[1] RK Pathria. Statistical mechanics. Butterworth Heinemann, Oxford, UK, 1996.

Posted in Math and Physics Learning. | 4 Comments »

PHY354 Advanced Classical Mechanics. Problem set 1 (ungraded).

Posted by peeterjoot on February 3, 2012

[Click here for a PDF of this post with nicer formatting and figures if the post had any (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

Disclaimer.

Ungraded solutions to posted problem set 1 (I’m auditing half the lectures for this course and won’t be submitting any solutions for grading).

Problem 1. Lorentz force Lagrangian.

Evaluate the Euler-Lagrange equations.

This problem has two parts. The first is to derive the Lorentz force equation

\begin{aligned}\mathbf{F} &= q (\mathbf{E} + \mathbf{v} \times \mathbf{B}) \\ \mathbf{E} &= -\boldsymbol{\nabla} \phi - \frac{\partial {\mathbf{A}}}{\partial {t}} \\ \mathbf{B} &= \boldsymbol{\nabla} \times \mathbf{A}\end{aligned} \hspace{\stretch{1}}(2.1)

using the Euler-Lagrange equations using the Lagrangian

\begin{aligned}\mathcal{L} = \frac{1}{{2}} m \mathbf{v}^2 + q \mathbf{v} \cdot \mathbf{A} - q \phi.\end{aligned} \hspace{\stretch{1}}(2.4)

In coordinates, employing summation convention, this Lagrangian is

\begin{aligned}\mathcal{L} = \frac{1}{{2}} m \dot{x}_j \dot{x}_j + q \dot{x}_j A_j - q \phi.\end{aligned} \hspace{\stretch{1}}(2.5)

Taking derivatives

\begin{aligned}\frac{\partial {\mathcal{L}}}{\partial {\dot{x}_i}} = m \dot{x}_i + q A_i,\end{aligned} \hspace{\stretch{1}}(2.6)

\begin{aligned}\frac{d}{dt} \frac{\partial {\mathcal{L}}}{\partial {\dot{x}_i}} &= m \ddot{x}_i + q \frac{\partial {A_i}}{\partial {t}}+ q \frac{\partial {A_i}}{\partial {x_j}} \frac{dx_j}{dt} \\ &=m \ddot{x}_i + q \frac{\partial {A_i}}{\partial {t}}+ q \frac{\partial {A_i}}{\partial {x_j}} \dot{x}_j\end{aligned}

This must equal

\begin{aligned}\frac{\partial {\mathcal{L}}}{\partial {x_i}} = q \dot{x}_j \frac{\partial {A_j}}{\partial {x_i}} - q \frac{\partial {\phi}}{\partial {x_i}},\end{aligned} \hspace{\stretch{1}}(2.7)

So we have

\begin{aligned}m \ddot{x}_i &= -q \frac{\partial {A_i}}{\partial {t}}- q \frac{\partial {A_i}}{\partial {x_j}} \dot{x}_j+q \dot{x}_j \frac{\partial {A_j}}{\partial {x_i}} - q \frac{\partial {\phi}}{\partial {x_i}} \\ &=-q \left( \frac{\partial {A_i}}{\partial {t}} + \frac{\partial {\phi}}{\partial {x_i}} \right)+q v_j \left( \frac{\partial {A_j}}{\partial {x_i}} - \frac{\partial {A_i}}{\partial {x_j}} \right)\end{aligned}

The first term is just q E_i. If we expand out (\mathbf{v} \times \mathbf{B})_i we see that the second term matches

\begin{aligned}(\mathbf{v} \times \mathbf{B})_i&=v_a B_b \epsilon_{abi} \\ &=v_a \partial_r A_s \epsilon_{rsb} \epsilon_{abi} \\ &=v_a \partial_r A_s \delta_{rs}^{[ia]} \\ &=v_a (\partial_i A_a - \partial_a A_i).\end{aligned}

An a \rightarrow j substitution, and comparison of this with the Euler-Lagrange result above, completes the exercise.
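The Levi-Civita contraction identity used above, \epsilon_{rsb} \epsilon_{abi} = \delta_{ri}\delta_{sa} - \delta_{ra}\delta_{si} (summed over b), is also easy to spot check numerically. A numpy sketch:

```python
import numpy as np
from itertools import permutations

# Levi-Civita symbol as a 3x3x3 array
eps = np.zeros((3, 3, 3))
for i, j, k in permutations(range(3)):
    # sign of the permutation (i, j, k) of (0, 1, 2)
    eps[i, j, k] = np.sign((j - i) * (k - i) * (k - j))

# check sum_b eps_{brs} eps_{bia} = delta_{ri} delta_{sa} - delta_{ra} delta_{si}
delta = np.eye(3)
lhs = np.einsum('brs,bia->rsia', eps, eps)
rhs = (np.einsum('ri,sa->rsia', delta, delta)
       - np.einsum('ra,si->rsia', delta, delta))
assert np.allclose(lhs, rhs)
```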

Show that the Lagrangian is gauge invariant.

With a gauge transformation of the form

\begin{aligned}\phi &\rightarrow \phi + \frac{\partial {\chi}}{\partial {t}} \\ \mathbf{A} &\rightarrow \mathbf{A} - \boldsymbol{\nabla} \chi,\end{aligned} \hspace{\stretch{1}}(2.8)

show that the Lagrangian is invariant.

We really only have to show that

\begin{aligned}\mathbf{v} \cdot \mathbf{A} - \phi\end{aligned} \hspace{\stretch{1}}(2.10)

is invariant. Making the transformation we have

\begin{aligned}\mathbf{v} \cdot \mathbf{A} - \phi&\rightarrow v_j \left(A_j - \partial_j \chi \right) - \left(\phi + \frac{\partial {\chi}}{\partial {t}} \right) \\ &=v_j A_j - \phi - v_j \partial_j \chi - \frac{\partial {\chi}}{\partial {t}} \\ &=\mathbf{v} \cdot \mathbf{A} - \phi- \left( \frac{d x_j}{dt} \frac{\partial {\chi}}{\partial {x_j}} + \frac{\partial {\chi}}{\partial {t}} \right) \\ &=\mathbf{v} \cdot \mathbf{A} - \phi- \frac{d \chi(\mathbf{x}, t)}{dt}.\end{aligned}

We see then that the Lagrangian transforms as

\begin{aligned}\mathcal{L} \rightarrow \mathcal{L} + \frac{d}{dt}\left( -q \chi \right),\end{aligned} \hspace{\stretch{1}}(2.11)

and differs only by a total derivative. With the lemma from the lecture, we see that this gauge transformation does not have any effect on the end result of applying the Euler-Lagrange equations.
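A concrete sanity check of this total derivative claim is easy with sympy. Here the potentials and gauge function are all arbitrary hypothetical choices, made only to exercise 2.11:

```python
import sympy as sp

t = sp.symbols('t')
x, y, z = (sp.Function(n)(t) for n in 'xyz')
v = sp.Matrix([x.diff(t), y.diff(t), z.diff(t)])

# arbitrary concrete potentials and gauge function chi = x y t (hypothetical)
A = sp.Matrix([x + t, y*z, z*sp.sin(t)])
phi = x*y + t**2
chi = x*y*t

grad_chi = sp.Matrix([y*t, x*t, 0])  # spatial gradient of chi = x y t
dchi_dt_partial = x*y                # explicit time partial of chi

original = v.dot(A) - phi
transformed = v.dot(A - grad_chi) - (phi + dchi_dt_partial)

# the difference should be exactly -d(chi)/dt along the trajectory x(t)
assert sp.simplify(transformed - original + sp.diff(chi, t)) == 0
```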

Problem 2. Action minimization problem for surface gravity.

Here we are told to guess at a solution

\begin{aligned}y = a_2 t^2 + a_1 t + a_0,\end{aligned} \hspace{\stretch{1}}(3.12)

for the height of a particle thrown up into the air. With initial condition y(0) = 0 we have

\begin{aligned}a_0 = 0,\end{aligned} \hspace{\stretch{1}}(3.13)

and with a final condition of y(T) = 0 we also have

\begin{aligned}0 &= a_2 T^2 + a_1 T \\ &= T( a_2 T + a_1 ),\end{aligned}

so a_1 = -a_2 T, giving

\begin{aligned}y(t) &= a_2 t^2 - a_2 T t = a_2 (t^2 - T t) \\ \dot{y}(t) &= a_2 (2 t - T )\end{aligned} \hspace{\stretch{1}}(3.14)

So our Lagrangian is

\begin{aligned}\mathcal{L} = \frac{1}{{2}} m a_2^2 (2 t - T )^2 - m g a_2 (t^2 - T t)\end{aligned} \hspace{\stretch{1}}(3.16)

and our action is

\begin{aligned}S = \int_0^T dt \left( \frac{1}{{2}} m a_2^2 (2 t - T )^2 - m g a_2 (t^2 - T t)\right).\end{aligned} \hspace{\stretch{1}}(3.17)

To minimize this action with respect to a_2 we take the derivative

\begin{aligned}\frac{\partial {S}}{\partial {a_2}} = \int_0^T dt \left( m a_2 (2 t - T )^2 - m g (t^2 - T t)\right).\end{aligned} \hspace{\stretch{1}}(3.18)

Integrating we have

\begin{aligned}0 &= \frac{\partial {S}}{\partial {a_2}} \\ &={\left.\left(\frac{1}{{6}} m a_2 (2 t - T )^3 - m g \left(\frac{1}{{3}}t^3 - \frac{1}{{2}}T t^2 \right)\right)\right\vert}_0^T \\ &=\frac{1}{{6}} m a_2 T^3 - m g \left(\frac{1}{{3}}T^3 - \frac{1}{{2}}T^3 \right)-\frac{1}{{6}} m a_2 (- T )^3 \\ &=m T^3 \left( \frac{1}{{3}} a_2 - g \left( \frac{1}{{3}} - \frac{1}{{2}} \right) \right) \\ &=\frac{1}{{3}} m T^3 \left( a_2 - g \left( 1 - \frac{3}{2} \right) \right) \\ \end{aligned}

or

\begin{aligned}a_2 + g/2 = 0,\end{aligned} \hspace{\stretch{1}}(3.19)

which is the result we are required to show.
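This action minimization is simple to reproduce with sympy:

```python
import sympy as sp

t, a2 = sp.symbols('t a_2')
T, m, g = sp.symbols('T m g', positive=True)

# trial trajectory with the boundary conditions y(0) = y(T) = 0 built in
y = a2 * (t**2 - T*t)
L = sp.Rational(1, 2) * m * sp.diff(y, t)**2 - m * g * y

# action for the trial path, extremized over the free parameter a_2
S = sp.integrate(L, (t, 0, T))
sol = sp.solve(sp.diff(S, a2), a2)

assert sol == [-g/2]
```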

Problem 3. Change of variables in a Lagrangian.

Here we want to show that after a change of variables, provided such a transformation is non-singular, the Euler-Lagrange equations are still valid.

Let’s write

\begin{aligned}r_i = r_i(q_1, q_2, \cdots q_N).\end{aligned} \hspace{\stretch{1}}(4.20)

Our “velocity” variables in terms of the original parameterization q_i are

\begin{aligned}\dot{r}_j = \frac{dr_j}{dt} = \frac{\partial {r_j}}{\partial {q_i}} \frac{d q_i}{dt} = \dot{q}_i \frac{\partial {r_j}}{\partial {q_i}},\end{aligned} \hspace{\stretch{1}}(4.21)

so we have

\begin{aligned}\frac{\partial {\dot{r}_j}}{\partial {\dot{q}_i}} = \frac{\partial {r_j}}{\partial {q_i}}.\end{aligned} \hspace{\stretch{1}}(4.22)

Computing the LHS of the Euler Lagrange equation we find

\begin{aligned}\frac{\partial {\mathcal{L}}}{\partial {q_i}} = \frac{\partial {\mathcal{L}}}{\partial {r_j}} \frac{\partial {r_j}}{\partial {q_i}}+\frac{\partial {\mathcal{L}}}{\partial {\dot{r}_j}} \frac{\partial {\dot{r}_j}}{\partial {q_i}}.\end{aligned} \hspace{\stretch{1}}(4.23)

For our RHS we start with

\begin{aligned}\frac{\partial {\mathcal{L}}}{\partial {\dot{q}_i}} = \frac{\partial {\mathcal{L}}}{\partial {r_j}} \frac{\partial {r_j}}{\partial {\dot{q}_i}}+\frac{\partial {\mathcal{L}}}{\partial {\dot{r}_j}} \frac{\partial {\dot{r}_j}}{\partial {\dot{q}_i}}= \frac{\partial {\mathcal{L}}}{\partial {r_j}} \frac{\partial {r_j}}{\partial {\dot{q}_i}}+\frac{\partial {\mathcal{L}}}{\partial {\dot{r}_j}} \frac{\partial {r_j}}{\partial {q_i}},\end{aligned} \hspace{\stretch{1}}(4.24)

but {\partial {r_j}}/{\partial {\dot{q}_i}} = 0, so this is just

\begin{aligned}\frac{\partial {\mathcal{L}}}{\partial {\dot{q}_i}} = \frac{\partial {\mathcal{L}}}{\partial {r_j}} \frac{\partial {r_j}}{\partial {\dot{q}_i}}+\frac{\partial {\mathcal{L}}}{\partial {\dot{r}_j}} \frac{\partial {\dot{r}_j}}{\partial {\dot{q}_i}}= \frac{\partial {\mathcal{L}}}{\partial {\dot{r}_j}} \frac{\partial {r_j}}{\partial {q_i}}.\end{aligned} \hspace{\stretch{1}}(4.25)

The Euler-Lagrange equations become

\begin{aligned}0 &=\frac{\partial {\mathcal{L}}}{\partial {r_j}} \frac{\partial {r_j}}{\partial {q_i}}+\frac{\partial {\mathcal{L}}}{\partial {\dot{r}_j}} \frac{\partial {\dot{r}_j}}{\partial {q_i}}- \frac{d{{}}}{dt} \left(\frac{\partial {\mathcal{L}}}{\partial {\dot{r}_j}} \frac{\partial {r_j}}{\partial {q_i}}\right) \\ &=   \frac{\partial {\mathcal{L}}}{\partial {r_j}} \frac{\partial {r_j}}{\partial {q_i}}+ \not{{\frac{\partial {\mathcal{L}}}{\partial {\dot{r}_j}} \frac{\partial {\dot{r}_j}}{\partial {q_i}}}}- \left( \frac{d{{}}}{dt} \frac{\partial {\mathcal{L}}}{\partial {\dot{r}_j}} \right) \frac{\partial {r_j}}{\partial {q_i}}- \not{{\frac{\partial {\mathcal{L}}}{\partial {\dot{r}_j}} \frac{d{{}}}{dt} \frac{\partial {r_j}}{\partial {q_i}} }}\\ &=\left( \frac{\partial {\mathcal{L}}}{\partial {r_j}} -\frac{d{{}}}{dt} \frac{\partial {\mathcal{L}}}{\partial {\dot{r}_j}} \right) \frac{\partial {r_j}}{\partial {q_i}}\end{aligned}

Since we have an assumption that the transformation is non-singular, the Jacobian matrix of partials is invertible,

\begin{aligned}\det \left( \frac{\partial {r_j}}{\partial {q_i}} \right) \ne 0,\end{aligned} \hspace{\stretch{1}}(4.26)

so we have the Euler-Lagrange equations for the new abstract coordinates as well

\begin{aligned}0 = \frac{\partial {\mathcal{L}}}{\partial {r_j}} -\frac{d{{}}}{dt} \frac{\partial {\mathcal{L}}}{\partial {\dot{r}_j}}.\end{aligned} \hspace{\stretch{1}}(4.27)

Posted in Math and Physics Learning. | Leave a Comment »

Jacobians and spherical polar gradient

Posted by peeterjoot on December 8, 2009

[Click here for a PDF of this post with nicer formatting]

Motivation

The dumbest and most obvious way to do a change of variables for the gradient is to utilize a chain rule expansion producing the Jacobian matrix to transform the coordinates. Here we do this to calculate the spherical polar representation of the gradient.

There are smarter and easier ways to do this, but there is some surprising simple structure to the resulting Jacobians that seems worth noting.

Spherical polar gradient coordinates in terms of Cartesian.

We wish to do a change of variables for each of the differential operators of the gradient. This is essentially just application of the chain rule, as in

\begin{aligned}\frac{\partial {}}{\partial {r}} = \frac{\partial {x}}{\partial {r}} \frac{\partial {}}{\partial {x}}+\frac{\partial {y}}{\partial {r}} \frac{\partial {}}{\partial {y}}+\frac{\partial {z}}{\partial {r}} \frac{\partial {}}{\partial {z}}.\end{aligned} \quad\quad\quad(1)

Collecting all such derivatives we have in column vector form

\begin{aligned}\begin{bmatrix}\partial_r \\ \partial_\theta \\ \partial_\phi\end{bmatrix}= \begin{bmatrix}\frac{\partial {x}}{\partial {r}} &\frac{\partial {y}}{\partial {r}} &\frac{\partial {z}}{\partial {r}}  \\ \frac{\partial {x}}{\partial {\theta}} &\frac{\partial {y}}{\partial {\theta}} &\frac{\partial {z}}{\partial {\theta}}  \\ \frac{\partial {x}}{\partial {\phi}} &\frac{\partial {y}}{\partial {\phi}} &\frac{\partial {z}}{\partial {\phi}} \end{bmatrix}\begin{bmatrix}\partial_x \\ \partial_y \\ \partial_z\end{bmatrix}.\end{aligned} \quad\quad\quad(2)

This becomes a bit more tractable with the Jacobian notation

\begin{aligned}\frac{\partial (x,y,z)}{\partial (r,\theta,\phi)}=\begin{bmatrix}\frac{\partial {x}}{\partial {r}} &\frac{\partial {y}}{\partial {r}} &\frac{\partial {z}}{\partial {r}}  \\ \frac{\partial {x}}{\partial {\theta}} &\frac{\partial {y}}{\partial {\theta}} &\frac{\partial {z}}{\partial {\theta}}  \\ \frac{\partial {x}}{\partial {\phi}} &\frac{\partial {y}}{\partial {\phi}} &\frac{\partial {z}}{\partial {\phi}}\end{bmatrix}.\end{aligned} \quad\quad\quad(3)

The change of variables for the operator triplet is then just

\begin{aligned}\begin{bmatrix}\partial_r \\ \partial_\theta \\ \partial_\phi\end{bmatrix}= \frac{\partial (x,y,z)}{\partial (r,\theta,\phi)}\begin{bmatrix}\partial_x \\ \partial_y \\ \partial_z\end{bmatrix}.\end{aligned} \quad\quad\quad(4)

This Jacobian matrix is also not even too hard to calculate. With \mathbf{x} = r \hat{\mathbf{r}}, we have x_k = r \hat{\mathbf{r}} \cdot \mathbf{e}_k, and

\begin{aligned}\frac{\partial {x_k}}{\partial {r}} &= \hat{\mathbf{r}} \cdot \mathbf{e}_k \\ \frac{\partial {x_k}}{\partial {\theta}} &= r \frac{\partial {\hat{\mathbf{r}}}}{\partial {\theta}} \cdot \mathbf{e}_k \\ \frac{\partial {x_k}}{\partial {\phi}} &= r \frac{\partial {\hat{\mathbf{r}}}}{\partial {\phi}} \cdot \mathbf{e}_k.\end{aligned} \quad\quad\quad(5)

The last two derivatives can be calculated easily if the radial unit vector is written out explicitly, with S and C for sine and cosine respectively, these are

\begin{aligned}\hat{\mathbf{r}} &= \begin{bmatrix}S_\theta C_\phi \\ S_\theta S_\phi \\ C_\theta \end{bmatrix} \\ \frac{\partial {\hat{\mathbf{r}}}}{\partial {\theta}} &= \begin{bmatrix}C_\theta C_\phi \\ C_\theta S_\phi \\ -S_\theta \end{bmatrix} \\ \frac{\partial {\hat{\mathbf{r}}}}{\partial {\phi}} &= \begin{bmatrix}-S_\theta S_\phi \\ S_\theta C_\phi \\ 0\end{bmatrix} .\end{aligned} \quad\quad\quad(8)

We can plug these into the elements of the Jacobian matrix explicitly, which produces

\begin{aligned}\frac{\partial (x,y,z)}{\partial (r,\theta,\phi)}=\begin{bmatrix} S_\theta C_\phi & S_\theta S_\phi & C_\theta \\ r C_\theta C_\phi & r C_\theta S_\phi & - r S_\theta \\ -r S_\theta S_\phi & rS_\theta C_\phi & 0\end{bmatrix},\end{aligned} \quad\quad\quad(11)

however, we are probably better off just referring back to 8, and writing

\begin{aligned}\frac{\partial (x,y,z)}{\partial (r,\theta,\phi)}=\begin{bmatrix} \hat{\mathbf{r}}^\text{T} \\ r \frac{\partial {\hat{\mathbf{r}}^\text{T}}}{\partial {\theta}} \\ r \frac{\partial {\hat{\mathbf{r}}^\text{T}}}{\partial {\phi}} \end{bmatrix}.\end{aligned} \quad\quad\quad(12)

Unfortunately, this is actually a bit of a dead end. We really want the inverse of this matrix because the desired quantity is

\begin{aligned}\boldsymbol{\nabla} = \begin{bmatrix}\mathbf{e}_1 & \mathbf{e}_2 & \mathbf{e}_3  \end{bmatrix}\begin{bmatrix}\partial_{x_1} \\ \partial_{x_2} \\ \partial_{x_3}\end{bmatrix}.\end{aligned} \quad\quad\quad(13)

(Here my matrix of unit vectors treats these abusively as single elements and not as column vectors).

The matrix of equation 12 does not look particularly fun to invert directly, and that is what we need to substitute into 13. One knows that in the end, if it was attempted, things should mystically simplify (presuming this was done error free).

Cartesian gradient coordinates in terms of spherical polar partials.

Let’s flip things upside down and calculate the inverse Jacobian matrix directly. This is a messier job, but it appears less messy than the matrix inversion above.

\begin{aligned}r^2 &= x^2 + y^2 + z^2  \\ \sin^2 \theta &= \frac{x^2 + y^2}{x^2 + y^2 + z^2} \\ \tan\phi &= \frac{y}{x}.\end{aligned} \quad\quad\quad(14)

The messy task is now the calculation of these derivatives.

For the first, from r^2 = x^2 + y^2 + z^2, taking partials on both sides, we have

\begin{aligned}\frac{\partial {r}}{\partial {x_k}} = \frac{x_k}{r}.\end{aligned} \quad\quad\quad(17)

But these are just the direction cosines, the components of our polar unit vector \hat{\mathbf{r}}. We can then write for all of these derivatives in column matrix form

\begin{aligned}\boldsymbol{\nabla} r = \hat{\mathbf{r}}\end{aligned} \quad\quad\quad(18)

Next from \sin^2\theta = (x^2 + y^2)/r^2, we get after some reduction

\begin{aligned}\frac{\partial {\theta}}{\partial {x}} &= \frac{1}{{r}} C_\theta C_\phi \\ \frac{\partial {\theta}}{\partial {y}} &= \frac{1}{{r}} C_\theta S_\phi \\ \frac{\partial {\theta}}{\partial {z}} &= -\frac{S_\theta}{r}.\end{aligned} \quad\quad\quad(19)

Observe that we can antidifferentiate with respect to \theta and obtain

\begin{aligned}\boldsymbol{\nabla} \theta &= \frac{1}{{r}}\begin{bmatrix}C_\theta C_\phi \\ C_\theta S_\phi \\ -S_\theta\end{bmatrix} \\ &=\frac{1}{{r}}\frac{\partial {}}{\partial {\theta}}\begin{bmatrix}S_\theta C_\phi \\ S_\theta S_\phi \\ C_\theta\end{bmatrix}.\end{aligned}

This last column vector is our friend the unit polar vector again, and we have

\begin{aligned}\boldsymbol{\nabla} \theta &= \frac{1}{{r}}\frac{\partial {\hat{\mathbf{r}}}}{\partial {\theta}}\end{aligned} \quad\quad\quad(22)

Finally for the \phi dependence we have after some reduction

\begin{aligned}\boldsymbol{\nabla} \phi &=\frac{1}{{r S_\theta}}\begin{bmatrix}-S_\phi \\ C_\phi \\ 0\end{bmatrix}.\end{aligned} \quad\quad\quad(23)

Again, we can antidifferentiate

\begin{aligned}\boldsymbol{\nabla} \phi &=\frac{1}{{r (S_\theta)^2}}\begin{bmatrix}-S_\theta S_\phi \\ S_\theta C_\phi \\ 0\end{bmatrix} \\ &=\frac{1}{{r (S_\theta)^2}}\frac{\partial {}}{\partial {\phi}}\begin{bmatrix}S_\theta C_\phi \\ S_\theta S_\phi \\ C_\theta\end{bmatrix}.\end{aligned}

We have our unit polar vector again, and our \phi partials nicely summarized by

\begin{aligned}\boldsymbol{\nabla} \phi &=\frac{1}{{r (S_\theta)^2}}\frac{\partial {\hat{\mathbf{r}}}}{\partial {\phi}}.\end{aligned} \quad\quad\quad(24)

With this we can now write out the Jacobian matrix either explicitly, or in column vector form in terms of \hat{\mathbf{r}}. First a reminder of why we want this matrix, for the following change of variables

\begin{aligned}\begin{bmatrix}\partial_x \\ \partial_y \\ \partial_z\end{bmatrix}= \begin{bmatrix}\frac{\partial {r}}{\partial {x}} &\frac{\partial {\theta}}{\partial {x}} &\frac{\partial {\phi}}{\partial {x}}  \\ \frac{\partial {r}}{\partial {y}} &\frac{\partial {\theta}}{\partial {y}} &\frac{\partial {\phi}}{\partial {y}}  \\ \frac{\partial {r}}{\partial {z}} &\frac{\partial {\theta}}{\partial {z}} &\frac{\partial {\phi}}{\partial {z}} \end{bmatrix}\begin{bmatrix}\partial_r \\ \partial_\theta \\ \partial_\phi\end{bmatrix}.\end{aligned} \quad\quad\quad(25)

We want the Jacobian matrix

\begin{aligned}\frac{\partial (r,\theta,\phi)}{\partial (x, y, z)}=\begin{bmatrix}\boldsymbol{\nabla} r & \boldsymbol{\nabla} \theta & \boldsymbol{\nabla} \phi\end{bmatrix}=\begin{bmatrix}\hat{\mathbf{r}} & \frac{1}{{r}} \frac{\partial {\hat{\mathbf{r}}}}{\partial {\theta}} & \frac{1}{{r \sin^2\theta}} \frac{\partial {\hat{\mathbf{r}}}}{\partial {\phi}}\end{bmatrix}.\end{aligned} \quad\quad\quad(26)

Explicitly this is

\begin{aligned}\frac{\partial (r,\theta,\phi)}{\partial (x, y, z)}=\begin{bmatrix}S_\theta C_\phi & \frac{1}{{r}} C_\theta C_\phi & -\frac{1}{{r S_\theta}} S_\phi \\ S_\theta S_\phi & \frac{1}{{r}} C_\theta S_\phi & \frac{C_\phi}{r S_\theta} \\ C_\theta        & -\frac{1}{{r}} S_\theta       &  0\end{bmatrix}.\end{aligned} \quad\quad\quad(27)

As a verification of correctness multiplication of this with 11 should produce identity. That’s a mess of trig that I don’t really feel like trying, but we can get a rough idea why it should all be the identity matrix by multiplying it out in block matrix form

\begin{aligned}\frac{\partial (x,y,z)}{\partial (r,\theta,\phi)}\frac{\partial (r,\theta,\phi)}{\partial (x, y, z)}&=\begin{bmatrix} \hat{\mathbf{r}}^\text{T} \\ r \frac{\partial {\hat{\mathbf{r}}^\text{T}}}{\partial {\theta}} \\ r \frac{\partial {\hat{\mathbf{r}}^\text{T}}}{\partial {\phi}} \end{bmatrix}\begin{bmatrix}\hat{\mathbf{r}} & \frac{1}{{r}} \frac{\partial {\hat{\mathbf{r}}}}{\partial {\theta}} & \frac{1}{{r \sin^2\theta}} \frac{\partial {\hat{\mathbf{r}}}}{\partial {\phi}}\end{bmatrix} \\ &=\begin{bmatrix} \hat{\mathbf{r}}^\text{T} \hat{\mathbf{r}}                & \frac{1}{{r}} \hat{\mathbf{r}}^\text{T} \frac{\partial {\hat{\mathbf{r}}}}{\partial {\theta}}      & \frac{1}{{r \sin^2 \theta}} \hat{\mathbf{r}}^\text{T} \frac{\partial {\hat{\mathbf{r}}}}{\partial {\phi}} \\ r \frac{\partial {\hat{\mathbf{r}}^\text{T}}}{\partial {\theta}} \hat{\mathbf{r}} & \frac{\partial {\hat{\mathbf{r}}^\text{T}}}{\partial {\theta}} \frac{\partial {\hat{\mathbf{r}}}}{\partial {\theta}} & \frac{1}{{\sin^2\theta}} \frac{\partial {\hat{\mathbf{r}}^\text{T}}}{\partial {\theta}} \frac{\partial {\hat{\mathbf{r}}}}{\partial {\phi}} \\ r \frac{\partial {\hat{\mathbf{r}}^\text{T}}}{\partial {\phi}} \hat{\mathbf{r}}   & \frac{\partial {\hat{\mathbf{r}}^\text{T}}}{\partial {\phi}} \frac{\partial {\hat{\mathbf{r}}}}{\partial {\theta}}   & \frac{1}{{\sin^2\theta}} \frac{\partial {\hat{\mathbf{r}}^\text{T}}}{\partial {\phi}} \frac{\partial {\hat{\mathbf{r}}}}{\partial {\phi}}\end{bmatrix}.\end{aligned}

The derivatives are vectors that lie tangential to the unit sphere. We can calculate this to verify, or we can look at the off diagonal terms which say just this if we trust the math that says these should all be zeros. For each of the off diagonal terms to be zero must mean that we have

\begin{aligned}0 = \frac{\partial {\hat{\mathbf{r}}}}{\partial {\theta}} \cdot \hat{\mathbf{r}} = \frac{\partial {\hat{\mathbf{r}}}}{\partial {\theta}} \cdot \frac{\partial {\hat{\mathbf{r}}}}{\partial {\phi}} = \frac{\partial {\hat{\mathbf{r}}}}{\partial {\phi}} \cdot \hat{\mathbf{r}} \end{aligned} \quad\quad\quad(28)

This makes intuitive sense. We can also verify quickly enough that ({\partial {\hat{\mathbf{r}}}}/{\partial {\theta}})^2 = 1, and ({\partial {\hat{\mathbf{r}}}}/{\partial {\phi}})^2 = \sin^2\theta (I did this with a back of the envelope calculation using geometric algebra). That is consistent with what this matrix product implies it should equal.
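That mess of trig is no trouble for a computer algebra system. A sympy check that the product of 11 and 27 is the identity:

```python
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)

x = r * sp.sin(th) * sp.cos(ph)
y = r * sp.sin(th) * sp.sin(ph)
z = r * sp.cos(th)

# forward Jacobian (eq. 11): row for each of r, theta, phi
J_fwd = sp.Matrix([[sp.diff(c, v) for c in (x, y, z)] for v in (r, th, ph)])

# inverse Jacobian (eq. 27): columns are grad r, grad theta, grad phi (eq. 26)
rhat = sp.Matrix([sp.sin(th)*sp.cos(ph), sp.sin(th)*sp.sin(ph), sp.cos(th)])
J_inv = sp.Matrix.hstack(rhat,
                         rhat.diff(th) / r,
                         rhat.diff(ph) / (r * sp.sin(th)**2))

assert sp.simplify(J_fwd * J_inv) == sp.eye(3)
```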

Completing the gradient change of variables to spherical polar coordinates.

We are now set to calculate the gradient in spherical polar coordinates from our Cartesian representation. From 13, 25, and 26 we have

\begin{aligned}\boldsymbol{\nabla} =\begin{bmatrix}\mathbf{e}_1 & \mathbf{e}_2 & \mathbf{e}_3  \end{bmatrix}\begin{bmatrix}\hat{\mathbf{r}} \cdot \mathbf{e}_1 & \frac{1}{{r}} \frac{\partial {\hat{\mathbf{r}}}}{\partial {\theta}} \cdot \mathbf{e}_1 & \frac{1}{{r \sin^2\theta}} \frac{\partial {\hat{\mathbf{r}}}}{\partial {\phi}} \cdot \mathbf{e}_1 \\ \hat{\mathbf{r}} \cdot \mathbf{e}_2 & \frac{1}{{r}} \frac{\partial {\hat{\mathbf{r}}}}{\partial {\theta}} \cdot \mathbf{e}_2 & \frac{1}{{r \sin^2\theta}} \frac{\partial {\hat{\mathbf{r}}}}{\partial {\phi}} \cdot \mathbf{e}_2 \\ \hat{\mathbf{r}} \cdot \mathbf{e}_3 & \frac{1}{{r}} \frac{\partial {\hat{\mathbf{r}}}}{\partial {\theta}} \cdot \mathbf{e}_3 & \frac{1}{{r \sin^2\theta}} \frac{\partial {\hat{\mathbf{r}}}}{\partial {\phi}} \cdot \mathbf{e}_3 \end{bmatrix}\begin{bmatrix}\partial_r \\ \partial_\theta \\ \partial_\phi\end{bmatrix}.\end{aligned} \quad\quad\quad(29)

The Jacobian matrix has been written out explicitly as scalars because we are now switching to an abusive notation using matrices of vector elements. Our Jacobian, a matrix of scalars, happened to have a nice compact representation in column vector form, but we cannot use this when multiplying out with our matrix elements (or perhaps we could if we invented more conventions, but let's avoid that). Having written it out in full we see that we recover our original compact Jacobian representation, and have just

\begin{aligned}\boldsymbol{\nabla} = \begin{bmatrix}\hat{\mathbf{r}} & \frac{1}{{r}} \frac{\partial {\hat{\mathbf{r}}}}{\partial {\theta}} & \frac{1}{{r \sin^2\theta}} \frac{\partial {\hat{\mathbf{r}}}}{\partial {\phi}} \end{bmatrix}\begin{bmatrix}\partial_r \\ \partial_\theta \\ \partial_\phi\end{bmatrix}.\end{aligned} \quad\quad\quad(30)

Expanding this last product we have the gradient in its spherical polar representation

\begin{aligned}\boldsymbol{\nabla} = \begin{bmatrix}\hat{\mathbf{r}} \frac{\partial {}}{\partial {r}} + \frac{1}{{r}} \frac{\partial {\hat{\mathbf{r}}}}{\partial {\theta}} \frac{\partial {}}{\partial {\theta}} + \frac{1}{{r \sin\theta}} \frac{1}{{\sin\theta}} \frac{\partial {\hat{\mathbf{r}}}}{\partial {\phi}} \frac{\partial {}}{\partial {\phi}}\end{bmatrix}.\end{aligned} \quad\quad\quad(31)

With the labels

\begin{aligned}\hat{\boldsymbol{\theta}} &= \frac{\partial {\hat{\mathbf{r}}}}{\partial {\theta}} \\ \hat{\boldsymbol{\phi}} &= \frac{1}{{\sin\theta}} \frac{\partial {\hat{\mathbf{r}}}}{\partial {\phi}},\end{aligned} \quad\quad\quad(32)

(having confirmed that these are unit vectors), we have the final result for the gradient in this representation

\begin{aligned}\boldsymbol{\nabla} = \hat{\mathbf{r}} \frac{\partial {}}{\partial {r}} + \frac{1}{{r}} \hat{\boldsymbol{\theta}} \frac{\partial {}}{\partial {\theta}} + \frac{1}{{r \sin\theta}} \hat{\boldsymbol{\phi}} \frac{\partial {}}{\partial {\phi}}.\end{aligned} \quad\quad\quad(34)

Here the matrix delimiters for the remaining one by one matrix term were also dropped.
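This final result 34 can be verified against the Cartesian gradient directly. A sympy sketch using a sample scalar field (arbitrary, chosen only for the check):

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
r, th, ph = sp.symbols('r theta phi', positive=True)

# a sample scalar field, chosen arbitrarily for the check
f_cart = x*y*z + x**2
grad_cart = sp.Matrix([sp.diff(f_cart, v) for v in (x, y, z)])

# the same field expressed in spherical polar coordinates
sub = {x: r*sp.sin(th)*sp.cos(ph), y: r*sp.sin(th)*sp.sin(ph), z: r*sp.cos(th)}
f_sph = f_cart.subs(sub)

# unit vectors of eq. 32 in Cartesian components
rhat = sp.Matrix([sp.sin(th)*sp.cos(ph), sp.sin(th)*sp.sin(ph), sp.cos(th)])
that = rhat.diff(th)
phat = rhat.diff(ph) / sp.sin(th)

# gradient in the spherical polar representation (eq. 34)
grad_sph = (rhat * sp.diff(f_sph, r)
            + that * sp.diff(f_sph, th) / r
            + phat * sp.diff(f_sph, ph) / (r * sp.sin(th)))

delta = grad_sph - grad_cart.subs(sub)
assert all(sp.simplify(e) == 0 for e in delta)
```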

General expression for gradient in orthonormal frames.

Having done the computation for the spherical polar case, we get the result for any orthonormal frame for free. That is just

\begin{aligned}\boldsymbol{\nabla} = \sum_i (\boldsymbol{\nabla} q_i) \frac{\partial {}}{\partial {q_i}}.\end{aligned} \quad\quad\quad(35)

From each of the gradients we can factor out a unit vector in the direction of the gradient, and have an expression that structurally has the same form as 34. Writing \hat{\mathbf{q}}_i = (\boldsymbol{\nabla} q_i)/{\left\lvert{\boldsymbol{\nabla} q_i}\right\rvert}, this is

\begin{aligned}\boldsymbol{\nabla} = \sum_i {\left\lvert{\boldsymbol{\nabla} q_i}\right\rvert} \hat{\mathbf{q}}_i \frac{\partial {}}{\partial {q_i}}.\end{aligned} \quad\quad\quad(36)

These individual direction gradients are not necessarily easy to compute. The procedures outlined in [1] are a more effective way of dealing with this general computational task. However, if we want, we can proceed in this dumb obvious way and be able to get the desired result knowing only how to apply the chain rule, and the Cartesian definition of the gradient.

References

[1] F.W. Byron and R.W. Fuller. Mathematics of Classical and Quantum Physics. Dover Publications, 1992.

Posted in Math and Physics Learning. | Tagged: , , , | Leave a Comment »

On Professor Dmitrevsky’s “the only valid Laplacian definition is the divergence of gradient”.

Posted by peeterjoot on December 2, 2009

[Click here for a PDF of this post with nicer formatting]

Dedication.

To all tyrannical old Professors driven to cruelty by an unending barrage of increasingly ill prepared students.

Motivation.

The text [1] has an excellent general derivation of a number of forms of the gradient, divergence, curl and Laplacian.

This is actually done, not starting with the usual Cartesian forms, but with more general definitions.

\begin{aligned}(\text{grad}\  \phi)_i &= \lim_{ds_i \rightarrow 0} \frac{\phi(q_i + dq_i) - \phi(q_i)}{ds_i} \\ \text{div}\  \mathbf{V} &= \lim_{\Delta \tau \rightarrow 0} \frac{1}{{\Delta \tau}} \int_\sigma \mathbf{V} \cdot d\boldsymbol{\sigma} \\ (\text{curl}\  \mathbf{V}) \cdot \mathbf{n} &= \lim_{\Delta \sigma \rightarrow 0} \frac{1}{{\Delta \sigma}} \oint_\lambda \mathbf{V} \cdot d\boldsymbol{\lambda} \\ \text{Laplacian}\  \phi &= \text{div} (\text{grad}\ \phi).\end{aligned} \quad\quad\quad(1)

These are then shown to imply the usual Cartesian definitions, plus provide the means to calculate the general relationships in whatever coordinate system you like. All in all one can’t beat this approach, and I’m not going to try to replicate it, because I can’t improve it in any way by doing so.

Given that, what do I have to say on this topic? Well, way way back in first year electricity and magnetism, my dictator of a prof, the intimidating but diminutive Dmitrevsky, yelled at us repeatedly that one cannot just dot the gradient to form the Laplacian. As far as he was concerned one can only say

\begin{aligned}\text{Laplacian}\  \phi &= \text{div} (\text{grad}\ \phi),\end{aligned} \quad\quad\quad(5)

and never never never, the busted way

\begin{aligned}\text{Laplacian}\  \phi &= (\boldsymbol{\nabla} \cdot \boldsymbol{\nabla}) \phi.\end{aligned} \quad\quad\quad(6)

Because “this only works in Cartesian coordinates”. He probably backed up this assertion with a heartwarming and encouraging statement like “back in the days when University of Toronto was a real school you would have learned this in kindergarten”.

This detail is actually something that has bugged me ever since, because my assumption was that, provided one is careful, a change to an alternate coordinate system should not matter. The gradient is still the gradient, so it seems to me that this ought to be a general way to calculate things.

Here we explore the validity of the dictatorial comments of Prof Dmitrevsky. The key to reconciling intuition and his statement turns out to lie with the fact that one has to let the gradient operate on the unit vectors in the non Cartesian representation as well as the partials, something that wasn’t clear as a first year student. Provided that this is done, the plain old dot product procedure yields the expected results.

This exploration will utilize a two dimensional space as a starting point, transforming from Cartesian to polar form representation. I’ll also utilize a geometric algebra representation of the polar unit vectors.

The gradient in polar form.

Let's start off with a calculation of the gradient in polar form starting with the Cartesian form. Writing \partial_x = {\partial {}}/{\partial {x}}, \partial_y = {\partial {}}/{\partial {y}}, \partial_r = {\partial {}}/{\partial {r}}, and \partial_\theta = {\partial {}}/{\partial {\theta}}, we want to map

\begin{aligned}\boldsymbol{\nabla} = \mathbf{e}_1 \partial_1 + \mathbf{e}_2 \partial_2= \begin{bmatrix}\mathbf{e}_1 & \mathbf{e}_2 \end{bmatrix}\begin{bmatrix}\partial_1 \\ \partial_2 \end{bmatrix},\end{aligned} \quad\quad\quad(7)

into the same form using \hat{\mathbf{r}}, \hat{\boldsymbol{\theta}}, \partial_r, and \partial_\theta. With i = \mathbf{e}_1 \mathbf{e}_2 we have

\begin{aligned}\begin{bmatrix}\mathbf{e}_1 \\ \mathbf{e}_2\end{bmatrix}=e^{i\theta}\begin{bmatrix}\hat{\mathbf{r}} \\ \hat{\boldsymbol{\theta}}\end{bmatrix}.\end{aligned} \quad\quad\quad(8)

Next we need to do a chain rule expansion of the partial operators to change variables. In matrix form that is

\begin{aligned}\begin{bmatrix}\frac{\partial {}}{\partial {x}} \\ \frac{\partial {}}{\partial {y}} \end{bmatrix}= \begin{bmatrix}\frac{\partial {r}}{\partial {x}} &          \frac{\partial {\theta}}{\partial {x}} \\ \frac{\partial {r}}{\partial {y}} &          \frac{\partial {\theta}}{\partial {y}} \end{bmatrix}\begin{bmatrix}\frac{\partial {}}{\partial {r}} \\ \frac{\partial {}}{\partial {\theta}} \end{bmatrix}.\end{aligned} \quad\quad\quad(9)

To calculate these partials we drop back to coordinates

\begin{aligned}x^2 + y^2 &= r^2 \\ \frac{y}{x} &= \tan\theta \\ \frac{x}{y} &= \cot\theta.\end{aligned} \quad\quad\quad(10)

From this we calculate

\begin{aligned}\frac{\partial {r}}{\partial {x}} &= \cos\theta \\ \frac{\partial {r}}{\partial {y}} &= \sin\theta \\  \frac{1}{{r\cos\theta}} &= \frac{\partial {\theta}}{\partial {y}} \frac{1}{{\cos^2\theta}} \\ \frac{1}{{r\sin\theta}} &= -\frac{\partial {\theta}}{\partial {x}} \frac{1}{{\sin^2\theta}},\end{aligned} \quad\quad\quad(13)

for

\begin{aligned}\begin{bmatrix}\frac{\partial {}}{\partial {x}} \\ \frac{\partial {}}{\partial {y}} \end{bmatrix}= \begin{bmatrix}\cos\theta & -\sin\theta/r \\ \sin\theta & \cos\theta/r\end{bmatrix}\begin{bmatrix}\frac{\partial {}}{\partial {r}} \\ \frac{\partial {}}{\partial {\theta}} \end{bmatrix}.\end{aligned} \quad\quad\quad(17)
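These Jacobian entries are easy to verify mechanically. The following sketch (not from the original post; sympy is assumed available) differentiates r = \sqrt{x^2+y^2} and \theta = \operatorname{atan2}(y, x) and rewrites each partial in polar variables.

```python
# Hypothetical check (not in the post) of the chain rule matrix (17).
import sympy as sp

x, y = sp.symbols('x y', real=True, positive=True)
R, Th = sp.symbols('R Th', positive=True)

r = sp.sqrt(x**2 + y**2)
theta = sp.atan2(y, x)

# Substitute x = R cos(Th), y = R sin(Th) to express each partial in polar form.
to_polar = {x: R * sp.cos(Th), y: R * sp.sin(Th)}

dr_dx = sp.simplify(sp.diff(r, x).subs(to_polar))
dr_dy = sp.simplify(sp.diff(r, y).subs(to_polar))
dth_dx = sp.simplify(sp.diff(theta, x).subs(to_polar))
dth_dy = sp.simplify(sp.diff(theta, y).subs(to_polar))

# Compare against the matrix entries of equation (17).
assert sp.simplify(dr_dx - sp.cos(Th)) == 0
assert sp.simplify(dr_dy - sp.sin(Th)) == 0
assert sp.simplify(dth_dx + sp.sin(Th) / R) == 0
assert sp.simplify(dth_dy - sp.cos(Th) / R) == 0
```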

We can now write down the gradient in polar form, prior to final simplification

\begin{aligned}\boldsymbol{\nabla} = e^{i\theta}\begin{bmatrix}\hat{\mathbf{r}} & \hat{\boldsymbol{\theta}}\end{bmatrix}\begin{bmatrix}\cos\theta & -\sin\theta/r \\ \sin\theta & \cos\theta/r\end{bmatrix}\begin{bmatrix}\frac{\partial {}}{\partial {r}} \\ \frac{\partial {}}{\partial {\theta}} \end{bmatrix}.\end{aligned} \quad\quad\quad(18)

Observe that we can factor a unit vector

\begin{aligned}\begin{bmatrix}\hat{\mathbf{r}} & \hat{\boldsymbol{\theta}}\end{bmatrix}=\hat{\mathbf{r}}\begin{bmatrix}1 & i\end{bmatrix}=\begin{bmatrix}i & 1\end{bmatrix}\hat{\boldsymbol{\theta}}\end{aligned} \quad\quad\quad(19)

so the 1,1 element of the matrix product in the interior is

\begin{aligned}\begin{bmatrix}\hat{\mathbf{r}} & \hat{\boldsymbol{\theta}}\end{bmatrix}\begin{bmatrix}\cos\theta \\ \sin\theta \end{bmatrix}=\hat{\mathbf{r}} e^{i\theta} = e^{-i\theta}\hat{\mathbf{r}}.\end{aligned} \quad\quad\quad(20)

Similarly, the 1,2 element of the matrix product in the interior is

\begin{aligned}\begin{bmatrix}\hat{\mathbf{r}} & \hat{\boldsymbol{\theta}}\end{bmatrix}\begin{bmatrix}-\sin\theta/r \\ \cos\theta/r\end{bmatrix}=\frac{1}{{r}} e^{-i\theta} \hat{\boldsymbol{\theta}}.\end{aligned} \quad\quad\quad(21)

The exponentials cancel nicely, leaving after a final multiplication with the polar form for the gradient

\begin{aligned}\boldsymbol{\nabla} = \hat{\mathbf{r}} \partial_r + \hat{\boldsymbol{\theta}} \frac{1}{{r}} \partial_\theta.\end{aligned} \quad\quad\quad(22)

That was a fun way to get the result, although we could have just looked it up. We want to use this now to calculate the Laplacian.
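Equation 22 can also be sanity checked symbolically. This sketch (mine, not the post's; sympy assumed) expands \hat{\mathbf{r}} and \hat{\boldsymbol{\theta}} on the Cartesian basis and compares against the Cartesian gradient of a test field.

```python
# Hypothetical check (not from the post): the polar gradient (22), with
# r_hat = (cos, sin) and theta_hat = (-sin, cos), should agree with the
# Cartesian gradient.  Test field: f = x y sqrt(x^2 + y^2) = r^3 sin cos.
import sympy as sp

x, y = sp.symbols('x y', real=True, positive=True)
R, Th = sp.symbols('R Th', positive=True)

f = x * y * sp.sqrt(x**2 + y**2)
grad_cart = sp.Matrix([sp.diff(f, x), sp.diff(f, y)])

# The same field, and the gradient of (22), in polar variables.
fp = R**3 * sp.sin(Th) * sp.cos(Th)
r_hat = sp.Matrix([sp.cos(Th), sp.sin(Th)])
th_hat = sp.Matrix([-sp.sin(Th), sp.cos(Th)])
grad_polar = r_hat * sp.diff(fp, R) + th_hat * sp.diff(fp, Th) / R

diff = grad_cart.subs({x: R * sp.cos(Th), y: R * sp.sin(Th)}) - grad_polar
for component in diff:
    assert sp.simplify(component) == 0
```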

Polar form Laplacian for the plane.

We are now ready to look at the Laplacian. First let’s do it the first year electricity and magnetism course way. We look up the formula for polar form divergence, the one we were supposed to have memorized in kindergarten, and find it to be

\begin{aligned}\text{div}\ \mathbf{A} = \partial_r A_r + \frac{1}{{r}} A_r + \frac{1}{{r}} \partial_\theta A_\theta.\end{aligned} \quad\quad\quad(23)

We can now apply this to the gradient vector in polar form which has components \boldsymbol{\nabla}_r = \partial_r, and \boldsymbol{\nabla}_\theta = (1/r)\partial_\theta, and get

\begin{aligned}\text{div}\ \text{grad} = \partial_{rr} + \frac{1}{{r}} \partial_r + \frac{1}{{r^2}} \partial_{\theta\theta}.\end{aligned} \quad\quad\quad(24)

This is the expected result, and what we should get by performing \boldsymbol{\nabla} \cdot \boldsymbol{\nabla} in polar form. Now, let’s do it the wrong way, dotting our gradient with itself.

\begin{aligned}\boldsymbol{\nabla} \cdot \boldsymbol{\nabla} &= \left(\partial_r, \frac{1}{{r}} \partial_\theta\right) \cdot \left(\partial_r, \frac{1}{{r}} \partial_\theta\right) \\ &= \partial_{rr} + \frac{1}{{r}} \partial_\theta \left(\frac{1}{{r}} \partial_\theta\right) \\ &= \partial_{rr} + \frac{1}{{r^2}} \partial_{\theta\theta}\end{aligned}

This is wrong! So is Dmitrevsky right that this procedure is flawed, or do you spot the mistake? I have also cruelly written this out in a way that obscures the error and highlights the source of the confusion.

The problem is that our unit vectors are functions, and they must also be included in the application of our partials. Using the coordinate polar form without explicitly putting in the unit vectors is how we go wrong. Here’s the right way

\begin{aligned}\boldsymbol{\nabla} \cdot \boldsymbol{\nabla} &=\left( \hat{\mathbf{r}} \partial_r + \hat{\boldsymbol{\theta}} \frac{1}{{r}} \partial_\theta \right) \cdot \left( \hat{\mathbf{r}} \partial_r + \hat{\boldsymbol{\theta}} \frac{1}{{r}} \partial_\theta \right) \\ &=\hat{\mathbf{r}} \cdot \partial_r \left(\hat{\mathbf{r}} \partial_r \right)+\hat{\mathbf{r}} \cdot \partial_r \left( \hat{\boldsymbol{\theta}} \frac{1}{{r}} \partial_\theta \right)+\hat{\boldsymbol{\theta}} \cdot \frac{1}{{r}} \partial_\theta \left( \hat{\mathbf{r}} \partial_r \right)+\hat{\boldsymbol{\theta}} \cdot \frac{1}{{r}} \partial_\theta \left( \hat{\boldsymbol{\theta}} \frac{1}{{r}} \partial_\theta \right) \\ \end{aligned}

Now we need the derivatives of our unit vectors. The \partial_r derivatives are zero since these have no radial dependence, but we do have \theta partials

\begin{aligned}\partial_\theta \hat{\mathbf{r}} &=\partial_\theta \left( \mathbf{e}_1 e^{i\theta} \right) \\ &=\mathbf{e}_1 \mathbf{e}_1 \mathbf{e}_2 e^{i\theta} \\ &=\mathbf{e}_2 e^{i\theta} \\ &=\hat{\boldsymbol{\theta}},\end{aligned}

and

\begin{aligned}\partial_\theta \hat{\boldsymbol{\theta}} &=\partial_\theta \left( \mathbf{e}_2 e^{i\theta} \right) \\ &=\mathbf{e}_2 \mathbf{e}_1 \mathbf{e}_2 e^{i\theta} \\ &=-\mathbf{e}_1 e^{i\theta} \\ &=-\hat{\mathbf{r}}.\end{aligned}

(One should be able to get the same results if these unit vectors were written out in full as \hat{\mathbf{r}} = \mathbf{e}_1 \cos\theta + \mathbf{e}_2 \sin\theta, and \hat{\boldsymbol{\theta}} = \mathbf{e}_2 \cos\theta - \mathbf{e}_1 \sin\theta, instead of using the obscure geometric algebra quaternionic rotation exponential operators.)
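Indeed, using those trigonometric forms, the unit vector derivatives take one line to verify with sympy (a sketch of mine, not part of the original post):

```python
# Hypothetical check (not in the post) of the unit vector derivatives using
# r_hat = (cos, sin) and theta_hat = (-sin, cos) on the Cartesian basis.
import sympy as sp

th = sp.Symbol('theta', real=True)
r_hat = sp.Matrix([sp.cos(th), sp.sin(th)])
theta_hat = sp.Matrix([-sp.sin(th), sp.cos(th)])

assert sp.diff(r_hat, th) == theta_hat    # d(r_hat)/d(theta) = theta_hat
assert sp.diff(theta_hat, th) == -r_hat   # d(theta_hat)/d(theta) = -r_hat
```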

Having calculated these partials we now have

\begin{aligned}(\boldsymbol{\nabla} \cdot \boldsymbol{\nabla}) =\partial_{rr} +\frac{1}{{r}} \partial_r +\frac{1}{{r^2}} \partial_{\theta\theta} \end{aligned} \quad\quad\quad(25)

Exactly what it should be, and what we got with the coordinate form of the divergence operator when applying the “Laplacian equals the divergence of the gradient” rule blindly. We see that the expectation that \boldsymbol{\nabla} \cdot \boldsymbol{\nabla} yields the Laplacian in coordinate systems other than Cartesian is not invalid, but that care is required to apply the chain rule to all functions, the unit vectors included. We also see that expressing a vector in coordinate form when the basis vectors are position dependent is a path to danger.
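The gap between the naive result and the correct one can also be exhibited concretely. This sketch (not from the post; sympy assumed) applies both candidate polar Laplacians to a test field and compares against the Cartesian Laplacian.

```python
# Hypothetical demonstration (not in the post): the correct polar Laplacian
# matches the Cartesian one, while the naive form that omits the (1/r) d/dr
# term does not.  Test field: f = x^2 y, i.e. fp = r^3 cos^2 sin.
import sympy as sp

x, y = sp.symbols('x y', real=True)
R, Th = sp.symbols('R Th', positive=True)

f = x**2 * y
lap_cart = sp.diff(f, x, 2) + sp.diff(f, y, 2)   # Cartesian Laplacian

fp = (R * sp.cos(Th))**2 * (R * sp.sin(Th))      # same field in polar variables

correct = sp.diff(fp, R, 2) + sp.diff(fp, R) / R + sp.diff(fp, Th, 2) / R**2
naive = sp.diff(fp, R, 2) + sp.diff(fp, Th, 2) / R**2   # missing (1/r) d/dr

target = lap_cart.subs({x: R * sp.cos(Th), y: R * sp.sin(Th)})
assert sp.simplify(correct - target) == 0
assert sp.simplify(naive - target) != 0
```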

Is this anything that our electricity and magnetism prof didn’t know? Unlikely. Is this something that our prof felt that could not be explained to a mob of first year students? Probably.

References

[1] F.W. Byron and R.W. Fuller. Mathematics of Classical and Quantum Physics. Dover Publications, 1992.

Posted in Math and Physics Learning. | Tagged: , , , , , , , , , | 1 Comment »

Two particle center of mass Laplacian change of variables.

Posted by peeterjoot on November 30, 2009

[Click here for a PDF of this post with nicer formatting]

Exercise 15.2 in [1] is to do a center of mass change of variables for the two particle Hamiltonian

\begin{aligned}H = - \frac{\hbar^2}{2 m_1} {\boldsymbol{\nabla}_1}^2- \frac{\hbar^2}{2 m_2} {\boldsymbol{\nabla}_2}^2+ V(\mathbf{r}_1 -\mathbf{r}_2).\end{aligned} \quad\quad\quad(1)

Before trying this, I was surprised that this would result in a diagonal form for the transformed Hamiltonian, so it is well worth doing the problem to see why this is the case. He uses

\begin{aligned}\boldsymbol{\xi} &= \mathbf{r}_1 - \mathbf{r}_2 \\ \boldsymbol{\eta} &= \frac{1}{{M}}( m_1 \mathbf{r}_1 + m_2 \mathbf{r}_2 ).\end{aligned} \quad\quad\quad(2)

Let's use coordinates {x_k}^{(1)} for \mathbf{r}_1, and {x_k}^{(2)} for \mathbf{r}_2. Expanding the first order partial operator for {\partial {}}/{\partial {{x_1}^{(1)}}} by chain rule in terms of \boldsymbol{\eta}, and \boldsymbol{\xi} coordinates we have

\begin{aligned}\frac{\partial {}}{\partial {x_1^{(1)}}}&=\frac{\partial {\eta_k}}{\partial {x_1^{(1)}}} \frac{\partial {}}{\partial {\eta_k}}+\frac{\partial {\xi_k}}{\partial {x_1^{(1)}}} \frac{\partial {}}{\partial {\xi_k}} \\ &=\frac{m_1}{M} \frac{\partial {}}{\partial {\eta_1}}+ \frac{\partial {}}{\partial {\xi_1}}.\end{aligned}

We also have

\begin{aligned}\frac{\partial {}}{\partial {x_1^{(2)}}}&=\frac{\partial {\eta_k}}{\partial {x_1^{(2)}}} \frac{\partial {}}{\partial {\eta_k}}+\frac{\partial {\xi_k}}{\partial {x_1^{(2)}}} \frac{\partial {}}{\partial {\xi_k}} \\ &=\frac{m_2}{M} \frac{\partial {}}{\partial {\eta_1}}- \frac{\partial {}}{\partial {\xi_1}}.\end{aligned}

The second partials for these x coordinates do not form a diagonal second partial operator; instead we have

\begin{aligned}\frac{\partial {}}{\partial {x_1^{(1)}}} \frac{\partial {}}{\partial {x_1^{(1)}}}&=\frac{(m_1)^2}{M^2} \frac{\partial^2}{\partial \eta_1 \partial \eta_1}+\frac{\partial^2}{\partial \xi_1 \partial \xi_1}+2 \frac{m_1}{M} \frac{\partial^2}{\partial \xi_1 \partial \eta_1} \\ \frac{\partial {}}{\partial {x_1^{(2)}}} \frac{\partial {}}{\partial {x_1^{(2)}}}&=\frac{(m_2)^2}{M^2} \frac{\partial^2}{\partial \eta_1 \partial \eta_1}+\frac{\partial^2}{\partial \xi_1 \partial \xi_1}-2 \frac{m_2}{M} \frac{\partial^2}{\partial \xi_1 \partial \eta_1}.\end{aligned} \quad\quad\quad(4)

The desired result follows directly, since the mixed partial terms conveniently cancel when we sum (1/m_1) {\partial {}}/{\partial {x_1^{(1)}}} {\partial {}}/{\partial {x_1^{(1)}}} +(1/m_2) {\partial {}}/{\partial {x_1^{(2)}}} {\partial {}}/{\partial {x_1^{(2)}}}. This leaves us with

\begin{aligned}H = \frac{-\hbar^2}{2} \sum_{k=1}^3 \left( \frac{1}{{M}} \frac{\partial^2}{\partial \eta_k \partial \eta_k}+ \left( \frac{1}{{m_1}} + \frac{1}{{m_2}} \right) \frac{\partial^2}{\partial \xi_k \partial \xi_k}\right)+ V(\boldsymbol{\xi}).\end{aligned} \quad\quad\quad(6)

With the shorthand of the text

\begin{aligned}\boldsymbol{\nabla}_{\boldsymbol{\eta}}^2 &= \sum_k \frac{\partial^2}{\partial \eta_k \partial \eta_k} \\ \boldsymbol{\nabla}_{\boldsymbol{\xi}}^2 &= \sum_k \frac{\partial^2}{\partial \xi_k \partial \xi_k},\end{aligned} \quad\quad\quad(7)

this is the result to be proven.
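The cancellation of the cross terms can be spot checked symbolically. This sketch (mine, not from the post; sympy assumed) uses a one dimensional exponential test function, which is generic enough to expose any surviving mixed \partial_\eta \partial_\xi term.

```python
# Hypothetical check (not in the post) of the center of mass separation in 1D.
# For psi = exp(a eta + b xi), the mixed derivative contributes a*b terms that
# must cancel between the two kinetic pieces, leaving a^2/M + b^2 (1/m1 + 1/m2).
import sympy as sp

x1, x2, m1, m2 = sp.symbols('x1 x2 m1 m2', positive=True)
a, b = sp.symbols('a b')
M = m1 + m2

# eta = (m1 x1 + m2 x2)/M, xi = x1 - x2, composed into the test function.
f = sp.exp(a * (m1 * x1 + m2 * x2) / M + b * (x1 - x2))

lhs = sp.diff(f, x1, 2) / m1 + sp.diff(f, x2, 2) / m2
rhs = (a**2 / M + b**2 * (1 / m1 + 1 / m2)) * f
assert sp.simplify(lhs - rhs) == 0
```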

References

[1] D. Bohm. Quantum Theory. Courier Dover Publications, 1989.

Posted in Math and Physics Learning. | Tagged: , , , | Leave a Comment »