# Peeter Joot's Blog.


# Posts Tagged ‘time evolution’

## The continuity equation for phase space density

Posted by peeterjoot on February 5, 2013

Thinking back to the origin of the 3D continuity equation from fluid mechanics, we used the geometrical argument that any change to the mass in a volume had to leave through the boundary. This was

\begin{aligned}\frac{\partial }{\partial t} \int_V \rho dV = -\int_{\partial V} \left( \rho \mathbf{u} \right) \cdot \hat{\mathbf{n}} dA= -\int_{V} \boldsymbol{\nabla} \cdot \left( \rho \mathbf{u} \right) dV.\end{aligned} \hspace{\stretch{1}}(1.0.1)

We used the divergence theorem above, allowing us to write, provided the volume is fixed,

\begin{aligned}0 =\int_V \left( \frac{\partial {\rho}}{\partial t} + \boldsymbol{\nabla} \cdot \left( \rho \mathbf{u} \right) \right)dV,\end{aligned} \hspace{\stretch{1}}(1.0.2)

or

\begin{aligned}0 =\frac{\partial {\rho}}{\partial t} + \boldsymbol{\nabla} \cdot \left( \rho \mathbf{u} \right).\end{aligned} \hspace{\stretch{1}}(1.0.3)

Consider the following phase space picture (Fig1). The time evolution of any individual particle (or set of particles that lie in the same element of phase space) is directed in the direction $(\dot{q}, \dot{p})$. So, the phase space density leaving through the surface is in proportion to the normal component of $\mathbf{j} = \rho (\dot{q}, \dot{p})$ (red in the figure).

Fig1: Phase space current

With this geometrical picture in mind, and with a basis $\mathbf{e}_{i_{\alpha}}$ for the positions $q_{i_\alpha}$ and $\mathbf{f}_{i_\alpha}$ for the momentum components $p_{i_\alpha}$, the $6N$ dimensional phase space equivalent of 1.0.1 is then

\begin{aligned}\frac{\partial }{\partial t} \int_{V_{6N}} \rho d^{3N} q d^{3N} p &= -\int_{\partial V_{6N}} \rho \sum_{i_\alpha} \left( \mathbf{e}_{i_\alpha} \dot{q}_{i_\alpha} + \mathbf{f}_{i_\alpha} \dot{p}_{i_\alpha} \right) \cdot \hat{\mathbf{n}} dA \\ &= -\int_{V_{6N}} d^{3N} q d^{3N} p \sum_{i_\alpha} \left( \frac{ \partial \left( \rho \dot{q}_{i_\alpha} \right) }{\partial q_{i_\alpha} } + \frac{ \partial \left( \rho \dot{p}_{i_\alpha} \right) }{\partial p_{i_\alpha} } \right)\end{aligned} \hspace{\stretch{1}}(1.0.4)

Here $dA$ is the surface area element for the phase space and $\hat{\mathbf{n}}$ is the unit normal to this surface. We have to assume the existence of a divergence theorem for the $6N$ dimensional space.

We can now regroup, and find for the integrand

\begin{aligned} 0 = \frac{\partial \rho}{\partial t} + \sum_{i_\alpha} \left( \frac{\partial \left( \rho \dot{q}_{i_\alpha} \right) }{ \partial q_{i_\alpha} } + \frac{\partial \left( \rho \dot{p}_{i_\alpha} \right) }{ \partial p_{i_\alpha} } \right),\end{aligned} \hspace{\stretch{1}}(1.0.5)

which is the **continuity equation**. The assumptions we have to make are that the flow of the density in phase space through the surface is proportional to the projection of the vector $\rho (\dot{q}_{i_\alpha}, \dot{p}_{i_\alpha})$ onto the surface normal, after which we use the same arguments (extended to a $6N$ dimensional space) as we did for the continuity equation for 3D masses.
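Since the equations of motion are Hamiltonian, the phase space velocity field is divergence free, which is what turns this continuity equation into Liouville's theorem. A minimal symbolic sketch of that cancellation for a single degree of freedom (the mass $m$ and potential $V$ are arbitrary placeholders, not from the post):

```python
import sympy as sp

# For Hamiltonian flow qdot = dH/dp, pdot = -dH/dq, the phase space velocity
# field (qdot, pdot) is divergence free, so the continuity equation reduces
# to Liouville's theorem.
q, p, m = sp.symbols('q p m', positive=True)
V = sp.Function('V')(q)            # arbitrary potential
H = p**2 / (2 * m) + V

qdot = sp.diff(H, p)               # Hamilton's equations
pdot = -sp.diff(H, q)

divergence = sp.diff(qdot, q) + sp.diff(pdot, p)
print(sp.simplify(divergence))     # -> 0
```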

## Spin down of coffee in a bottomless cup.

Posted by peeterjoot on April 25, 2012

[Click here for a PDF of this post with nicer formatting (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

# Motivation.

Here’s a variation of a problem outlined in section 2 of [1], which looked at the time evolution of fluid with initial rotational motion, after the (cylindrical) rotation driver stops, later describing this as the spin down of a cup of tea. I’ll work the problem in more detail than in the text, and also make two refinements.

• I drink coffee and not tea.
• I stir my coffee in the interior of the cup and not on the outer edge.

Because of the second point I’ll model my stir stick as a rotating cylinder in the cup and not by somebody spinning the cup itself to stir the tea. This only changes the solution for the steady state part of the problem.

# Guts

We’ll work in cylindrical coordinates following the conventions of figure (1).

Figure 1: Fluid flow in nested cylinders.

We’ll assume a solution with velocity azimuthal in direction, and with both pressure and velocity only radially dependent.

\begin{aligned}\mathbf{u} = u(r) \hat{\boldsymbol{\phi}}.\end{aligned} \hspace{\stretch{1}}(2.1)

\begin{aligned}p = p(r)\end{aligned} \hspace{\stretch{1}}(2.2)

Let’s first verify that this meets the non-compressible condition that eliminates the $\mu \boldsymbol{\nabla} (\boldsymbol{\nabla} \cdot \mathbf{u})$ term from Navier-Stokes

\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{u}&=\left(\hat{\mathbf{r}} \partial_r + \frac{\hat{\boldsymbol{\phi}}}{r} \partial_\phi + \hat{\mathbf{z}} \partial_z\right) \cdot \left(u \hat{\boldsymbol{\phi}}\right) \\ &=\hat{\boldsymbol{\phi}} \cdot\left(\hat{\mathbf{r}} \partial_r u + \frac{\hat{\boldsymbol{\phi}}}{r} \partial_\phi u + \hat{\mathbf{z}} \partial_z u\right) +u\left(\hat{\mathbf{r}} \cdot \partial_r \hat{\boldsymbol{\phi}} + \frac{\hat{\boldsymbol{\phi}}}{r} \cdot \partial_\phi \hat{\boldsymbol{\phi}} + \hat{\mathbf{z}} \cdot \partial_z \hat{\boldsymbol{\phi}}\right) \\ &=\hat{\boldsymbol{\phi}} \cdot \hat{\mathbf{r}} \partial_r u +u\frac{\hat{\boldsymbol{\phi}}}{r} \cdot \left(-\hat{\mathbf{r}}\right) \\ &= 0.\end{aligned}
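The same cancellation can be cross-checked by writing $\mathbf{u} = u(r) \hat{\boldsymbol{\phi}}$ in Cartesian components and differentiating symbolically; a sketch, with $u$ an arbitrary differentiable profile:

```python
import sympy as sp

# Cross-check of incompressibility: u = u(r) phihat has Cartesian components
# (-u y / r, u x / r); its divergence should vanish identically.
x, y = sp.symbols('x y', positive=True)
r = sp.sqrt(x**2 + y**2)
u = sp.Function('u')(r)        # arbitrary radial profile

ux = -u * y / r                # phihat = (-y/r, x/r)
uy = u * x / r

div = sp.simplify(sp.diff(ux, x) + sp.diff(uy, y))
print(div)                     # -> 0
```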

Good. Now let’s express each of the terms of Navier-Stokes in cylindrical form. Our time dependence is

\begin{aligned}\rho \partial_t u(r, t) \hat{\boldsymbol{\phi}}=\rho \hat{\boldsymbol{\phi}} \partial_t u.\end{aligned} \hspace{\stretch{1}}(2.3)

Our inertial term is

\begin{aligned}\begin{aligned}\rho (\mathbf{u} \cdot \boldsymbol{\nabla}) \mathbf{u}&=\frac{\rho u}{r} \partial_\phi (u \hat{\boldsymbol{\phi}}) \\ &=\frac{\rho u^2}{r} (-\hat{\mathbf{r}}).\end{aligned}\end{aligned} \hspace{\stretch{1}}(2.4)

Our pressure term is

\begin{aligned}-\boldsymbol{\nabla} p=-\hat{\mathbf{r}} \partial_r p,\end{aligned} \hspace{\stretch{1}}(2.5)

and our Laplacian term is

\begin{aligned}\begin{aligned}\mu \boldsymbol{\nabla}^2 \mathbf{u}&=\mu \left( \frac{1}{{r}} \partial_r ( r \partial_r) + \frac{1}{{r^2}} \partial_{\phi\phi} + \partial_{z z}\right)u(r) \hat{\boldsymbol{\phi}} \\ &=\mu \left( \frac{\hat{\boldsymbol{\phi}}}{r} \partial_r ( r \partial_r u) + \frac{-\hat{\mathbf{r}} u}{r^2} \right).\end{aligned}\end{aligned} \hspace{\stretch{1}}(2.6)

Putting things together, we find that Navier-Stokes takes the form

\begin{aligned}\rho \hat{\boldsymbol{\phi}} \partial_t u+\frac{\rho u^2}{r} (-\hat{\mathbf{r}})=-\hat{\mathbf{r}} \partial_r p+\mu \left( \frac{\hat{\boldsymbol{\phi}}}{r} \partial_r ( r \partial_r u) + \frac{-\hat{\boldsymbol{\phi}} u}{r^2} \right),\end{aligned} \hspace{\stretch{1}}(2.7)

which nicely splits into separate equations for the $\hat{\boldsymbol{\phi}}$ and $\hat{\mathbf{r}}$ directions respectively

\begin{aligned}\frac{1}{{\nu}} \partial_t u=\frac{1}{r} \partial_r ( r \partial_r u) - \frac{u}{r^2}\end{aligned} \hspace{\stretch{1}}(2.8a)

\begin{aligned}\frac{\rho u^2}{r}=\partial_r p.\end{aligned} \hspace{\stretch{1}}(2.8b)

Before $t = 0$ we seek the steady state, the solution of

\begin{aligned}r \partial_r ( r \partial_r u) - u = 0.\end{aligned} \hspace{\stretch{1}}(2.9)

We’ve seen that

\begin{aligned}u(r) = A r + \frac{B}{r}\end{aligned} \hspace{\stretch{1}}(2.10)

is the general solution, and can now fit this to the boundary value constraints. For the interior portion of the cup we have

\begin{aligned}{\left.{{A r + \frac{B}{r}}}\right\vert}_{{r = 0}} = 0\end{aligned} \hspace{\stretch{1}}(2.11)

so $B = 0$ is required. For the interface of the “stir-stick” (moving fast enough that we can consider it having a cylindrical effect) at $r = R_1$ we have

\begin{aligned}A R_1 = \Omega R_1,\end{aligned} \hspace{\stretch{1}}(2.12)

so the interior portion of our steady state coffee velocity is just

\begin{aligned}\mathbf{u} = \Omega r \hat{\boldsymbol{\phi}}.\end{aligned} \hspace{\stretch{1}}(2.13)

Between the cup edge and the stir-stick we have to solve

\begin{aligned}A R_1 + \frac{B}{R_1} &= \Omega R_1 \\ A R_2 + \frac{B}{R_2} &= 0,\end{aligned} \hspace{\stretch{1}}(2.14)

or

\begin{aligned}A R_1^2 + B &= \Omega R_1^2 \\ A R_2^2 + B &= 0.\end{aligned} \hspace{\stretch{1}}(2.16)

Subtracting we find

\begin{aligned}A = -\frac{\Omega R_1^2}{R_2^2 - R_1^2}\end{aligned} \hspace{\stretch{1}}(2.18a)

\begin{aligned}B = \frac{\Omega R_1^2 R_2^2}{R_2^2 - R_1^2},\end{aligned} \hspace{\stretch{1}}(2.18b)

so our steady state coffee flow is

\begin{aligned}\mathbf{u} =\left\{\begin{array}{l l}\Omega r \hat{\boldsymbol{\phi}}& \quad \mbox{$r \in [0, R_1]$} \\ \frac{\Omega R_1^2}{R_2^2 - R_1^2} \left( \frac{R_2^2}{r} - r \right)\hat{\boldsymbol{\phi}}& \quad \mbox{$r \in [R_1, R_2]$} \\ \end{array}\right.\end{aligned} \hspace{\stretch{1}}(2.19)
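As a quick numerical sanity check, this profile is continuous at the stir-stick radius and vanishes at the cup wall. A sketch with illustrative values for $\Omega$, $R_1$, $R_2$ (not taken from the post):

```python
import numpy as np

# Evaluate the steady state profile (2.19): the rigid-rotation interior and
# the annulus solution should agree at r = R1, and the flow should vanish
# at the cup wall r = R2. Parameter values are illustrative only.
Omega, R1, R2 = 3.0, 0.6, 1.0

def u_steady(r):
    inner = Omega * r
    outer = Omega * R1**2 / (R2**2 - R1**2) * (R2**2 / r - r)
    return np.where(r <= R1, inner, outer)

u_inner = Omega * R1
u_outer = Omega * R1**2 / (R2**2 - R1**2) * (R2**2 / R1 - R1)
print(np.isclose(u_inner, u_outer))                   # True: continuous at R1
print(np.isclose(u_steady(np.array([R2]))[0], 0.0))   # True: no-slip at R2
```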

## Time evolution.

We can use a separation of variables technique with $u(r, t) = R(r) T(t)$ to find

\begin{aligned}\frac{1}{{\nu}} \frac{T'}{T} = \frac{1}{{R}} \left( \frac{1}{r} \partial_r ( r \partial_r R) - \frac{R}{r^2}\right)= -\lambda^2,\end{aligned} \hspace{\stretch{1}}(2.20)

which gives us

\begin{aligned}T \propto e^{-\lambda^2 \nu t},\end{aligned} \hspace{\stretch{1}}(2.21)

and $R$ specified by

\begin{aligned}0 = r^2 \frac{d^2 R}{dr^2} + r \frac{d R}{dr} + R \left( r^2 \lambda^2 - 1 \right).\end{aligned} \hspace{\stretch{1}}(2.22)

Checking [2] (9.1.1) we see that this can be put into the standard form of the Bessel equation if we eliminate the $\lambda$ term. We can do that writing $z = r \lambda$, $\mathcal{R}(z) = R(z/\lambda)$ and noting that $r d/dr = z d/dz$ and $r^2 d^2/dr^2 = z^2 d^2/dz^2$, which gives us

\begin{aligned}0 = z^2 \frac{d^2 \mathcal{R}}{dz^2} + z \frac{d \mathcal{R}}{dz} + \mathcal{R} \left( z^2 - 1 \right).\end{aligned} \hspace{\stretch{1}}(2.23)

The solutions are

\begin{aligned}\mathcal{R}(z) = J_{\pm 1}(z), Y_{\pm 1}(z).\end{aligned} \hspace{\stretch{1}}(2.24)

From (9.1.5) of the handbook we see that the plus and minus variations are linearly dependent since $J_{-1}(z) = -J_1(z)$ and $Y_{-1}(z) = -Y_1(z)$, and from (9.1.8) that $Y_1(z)$ is infinite at the origin, so our general solution has to be of the form

\begin{aligned}\mathbf{u}(r, t) = \hat{\boldsymbol{\phi}} \sum_\lambda c_\lambda e^{-\lambda^2 \nu t} J_{1}(r \lambda).\end{aligned} \hspace{\stretch{1}}(2.25)
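As a spot check that $J_1$ really satisfies the Bessel equation above, we can evaluate the residual numerically (the sample point $z = 1.7$ is an arbitrary choice):

```python
from scipy.special import j1, jvp

# Residual of z^2 R'' + z R' + (z^2 - 1) R = 0 for R = J_1.
# jvp(1, z, n) is the nth derivative of J_1 at z.
z = 1.7
residual = z**2 * jvp(1, z, 2) + z * jvp(1, z, 1) + (z**2 - 1) * j1(z)
print(abs(residual) < 1e-10)   # True
```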

In the text, I see that the transformation $\lambda \rightarrow \lambda/a$ (where $a$ was the radius of the cup) is made so that the Bessel function parameter was dimensionless. We can do that too but write

\begin{aligned}\mathbf{u}(r, t) = \hat{\boldsymbol{\phi}} \sum_\lambda c_\lambda e^{-\frac{\lambda^2}{R_2^2} \nu t} J_{1}\left(\lambda \frac{r}{R_2}\right).\end{aligned} \hspace{\stretch{1}}(2.26)

Our boundary value constraint is that we require this to match 2.19 at $t = 0$. Let’s write $R_2 = R$, $R_1 = a R$, $z = r/R$, so that we are working in the unit circle with $z \in [0, 1]$. Our boundary problem can now be expressed as

\begin{aligned}\frac{1}{{\Omega R}} \sum_\lambda c_\lambda J_{1}\left(\lambda z\right)=\left\{\begin{array}{l l}z & \quad \mbox{$z \in [0, a]$} \\ \frac{1}{\frac{1}{a^2} - 1} \left( \frac{1}{{z}} - z\right)& \quad \mbox{$z \in [a, 1]$} \\ \end{array}\right.\end{aligned} \hspace{\stretch{1}}(2.27)

Let’s pull the $\Omega R$ factor into $c_\lambda$ and state the problem to be solved as

\begin{aligned}\mathbf{u}(r, t) = \Omega R \hat{\boldsymbol{\phi}} \sum_{i=1}^n c_i e^{-\frac{\lambda_i^2}{R^2} \nu t} J_{1}\left(\lambda_i \frac{r}{R}\right)\end{aligned} \hspace{\stretch{1}}(2.28a)

\begin{aligned}\sum_{i = 1}^n c_i J_{1}\left(\lambda_i z\right) = \phi(z)\end{aligned} \hspace{\stretch{1}}(2.28b)

\begin{aligned}\phi(z) = \left\{\begin{array}{l l}z & \quad \mbox{$z \in [0, a]$} \\ \frac{a^2}{1 - a^2} \left( \frac{1}{{z}} - z\right)& \quad \mbox{$z \in [a, 1]$} \\ \end{array}\right..\end{aligned} \hspace{\stretch{1}}(2.28c)

Looking at section 2.7 of [3] it appears the solutions for $c_i$ can be obtained from

\begin{aligned}c_i = \frac{\int_0^1 z \phi(z) J_1(\lambda_i z) dz}{\int_0^1 z J_1^2(\lambda_i z) dz},\end{aligned} \hspace{\stretch{1}}(2.29)

where $\lambda_i$ are the zeros of $J_1$.
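These coefficients can also be computed with scipy rather than Mathematica; a sketch for $a = 0.6$ (the number of terms and the sample point are arbitrary choices of mine):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j1, jn_zeros

# Fourier-Bessel coefficients c_i for the fitting problem (2.28b),
# using the first n zeros of J_1.
a, n = 0.6, 5
lam = jn_zeros(1, n)

def phi(z):
    # the piecewise steady state profile (2.28c), with a corner at z = a
    return z if z <= a else a**2 / (1 - a**2) * (1 / z - z)

c = []
for lam_i in lam:
    num, _ = quad(lambda z: z * phi(z) * j1(lam_i * z), 0, 1, points=[a])
    den, _ = quad(lambda z: z * j1(lam_i * z)**2, 0, 1)
    c.append(num / den)

# the partial sum should reproduce phi away from the corner at z = a
z0 = 0.3
approx = sum(ci * j1(li * z0) for ci, li in zip(c, lam))
print(abs(approx - phi(z0)) < 0.1)   # True at this smooth sample point
```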

To get a feel for these, a plot of the first few of these fitting functions is shown in figure (2).

Figure 2: First four zero crossing Bessel functions.

Using Mathematica in bottomlessCoffee.cdf, these coefficients were calculated for $a = 0.6$. The $n = 1, 3, 5$ approximations to the fitting function are plotted with a comparison to the steady state velocity profile in figure (3).

Figure 3: Bessel function fitting for the steady state velocity profile for n = 1, 3, 5.

As indicated in the text, the spin down is way too slow to match reality (this can be seen visually in the worksheet by animating it).

# References

[1] D.J. Acheson. Elementary fluid dynamics. Oxford University Press, USA, 1990.

[2] M. Abramowitz and I.A. Stegun. Handbook of mathematical functions with formulas, graphs, and mathematical tables, volume 55. Dover Publications, 1964.

[3] H. Sagan. Boundary and eigenvalue problems in mathematical physics. Dover Publications, 1989.

## Channel flow with step pressure gradient

Posted by peeterjoot on April 1, 2012

[Click here for a PDF of this post with nicer formatting and figures if the post had any (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

# Motivation and statement.

This is problem 2.5 from [1].

Viscous fluid is at rest in a two-dimensional channel between stationary rigid walls with $y = \pm h$. For $t \ge 0$ a constant pressure gradient $P = -dp/dx$ is imposed. Show that $u(y, t)$ satisfies

\begin{aligned}\frac{\partial {u}}{\partial {t}} = \nu \frac{\partial^2 {{u}}}{\partial {{y}}^2} + \frac{P}{\rho},\end{aligned} \hspace{\stretch{1}}(1.1)

and give suitable initial and boundary conditions. Find $u(y, t)$ in form of a Fourier series, and show that the flow approximates to steady channel flow when $t \gg h^2/\nu$.

# Solution

With only horizontal components to the flow, the Navier-Stokes equations for incompressible flow are

\begin{subequations}

\begin{aligned}\frac{\partial {u}}{\partial {t}} + u \frac{\partial {u}}{\partial {x}} = -\frac{1}{{\rho}} \frac{\partial {p}}{\partial {x}} + \nu \left( \frac{\partial^2 {{}}}{\partial {{x}}^2}+\frac{\partial^2 {{}}}{\partial {{y}}^2}\right)u\end{aligned} \hspace{\stretch{1}}(2.2a)

\begin{aligned}\frac{\partial {u}}{\partial {x}} = 0\end{aligned} \hspace{\stretch{1}}(2.2b)

\end{subequations}

Substitution of 2.2b into 2.2a gives us

\begin{aligned}\frac{\partial {u}}{\partial {t}} = -\frac{1}{{\rho}} \frac{\partial {p}}{\partial {x}} + \nu \frac{\partial^2 {{u}}}{\partial {{y}}^2}.\end{aligned} \hspace{\stretch{1}}(2.3)

Our equation to solve is therefore

\begin{aligned}\boxed{\frac{\partial {u}}{\partial {t}} = \Theta(t) \frac{P}{\rho} + \nu \frac{\partial^2 {{u}}}{\partial {{y}}^2}.}\end{aligned} \hspace{\stretch{1}}(2.4)

This equation, for $t < 0$, allows for solutions

\begin{aligned}u = A y + B\end{aligned} \hspace{\stretch{1}}(2.5)

but the problem states that the fluid is at rest initially, so we don’t really have to solve anything (i.e. $A = B = 0$).

The no-slip conditions introduce boundary value conditions $u(\pm h, t) = 0$.

For $t \ge 0$ we have

\begin{aligned}\frac{\partial {u}}{\partial {t}} = \frac{P}{\rho} + \nu \frac{\partial^2 {{u}}}{\partial {{y}}^2}.\end{aligned} \hspace{\stretch{1}}(2.6)

If we attempt separation of variables with $u(y, t) = Y(y) T(t)$, our equation takes the form

\begin{aligned}T' Y = \frac{P}{\rho} + \nu T Y''.\end{aligned} \hspace{\stretch{1}}(2.7)

We see that the non-homogeneous term prevents successful application of separation of variables. Let’s modify our problem by attempting to recast our equation into a homogeneous form by adding a particular solution for the steady state flow problem. That problem was the solution of

\begin{aligned}\frac{\partial^2 {{}}}{\partial {{y}}^2} u_s(y, 0) = -\frac{P}{\rho \nu}\end{aligned} \hspace{\stretch{1}}(2.8)

which has solution

\begin{aligned}u_s(y, 0) = \frac{P}{2 \rho \nu} \left( h^2 - y^2 \right) + A y + B.\end{aligned} \hspace{\stretch{1}}(2.9)

The freedom to incorporate an $h^2$ constant into the equation as an integration constant has been employed, knowing that it will kill the $y^2$ contributions at $y = \pm h$, making the boundary condition matching easier. Our no-slip conditions give us

\begin{aligned}0 &= A h + B \\ 0 &= -A h + B.\end{aligned} \hspace{\stretch{1}}(2.10)

Adding this we have $2 B = 0$, and subtracting gives us $2 A h = 0$, so a specific solution that matches our required boundary value (and initial value) conditions is just the steady state channel flow solution we are familiar with

\begin{aligned}u_s(y, 0) = \frac{P}{2 \rho \nu} \left( h^2 - y^2 \right).\end{aligned} \hspace{\stretch{1}}(2.12)

Let’s now assume that our general solution has the form

\begin{aligned}u(y, t) = u_H(y, t) + u_s(y, 0).\end{aligned} \hspace{\stretch{1}}(2.13)

Applying the Navier-Stokes equation to this gives us

\begin{aligned}\frac{\partial {u_H}}{\partial {t}} = \frac{P}{\rho} + \nu \frac{\partial^2 {{u_H}}}{\partial {{y}}^2} + \nu \frac{\partial^2 {{u_s}}}{\partial {{y}}^2}.\end{aligned} \hspace{\stretch{1}}(2.14)

But from 2.8, we see that all we have left is a homogeneous problem in $u_H$

\begin{aligned}\frac{\partial {u_H}}{\partial {t}} = \nu \frac{\partial^2 {{u_H}}}{\partial {{y}}^2},\end{aligned} \hspace{\stretch{1}}(2.15)

where our boundary value conditions are now given by

\begin{aligned}0 &= u_H(\pm h, t) + u_s(\pm h) \\ &= u_H(\pm h, t),\end{aligned}

and

\begin{aligned}0 &= u(y, 0) \\ &= u_H(y, 0) + \frac{P}{2 \rho \nu} \left( h^2 - y^2 \right),\end{aligned}

or
\begin{subequations}

\begin{aligned}u_H(\pm h, t) = 0\end{aligned} \hspace{\stretch{1}}(2.16a)

\begin{aligned}u_H(y, 0) = -\frac{P}{2 \rho \nu} \left( h^2 - y^2 \right).\end{aligned} \hspace{\stretch{1}}(2.16b)

\end{subequations}

Now we can apply separation of variables with $u_H = T(t) Y(y)$, yielding

\begin{aligned}T' Y = \nu T Y'',\end{aligned} \hspace{\stretch{1}}(2.17)

or

\begin{aligned}\frac{T'}{T} = \nu \frac{Y''}{Y} = \text{constant} = - \nu \alpha^2.\end{aligned} \hspace{\stretch{1}}(2.18)

Here a positive constant $\nu \alpha^2$ has been used assuming that we want a solution that is damped with time.

Our solutions are

\begin{aligned}T &\propto e^{- \nu \alpha^2 t} \\ Y &= A \sin \alpha y + B \cos\alpha y,\end{aligned} \hspace{\stretch{1}}(2.19)

or

\begin{aligned}u_H(y, t) = \sum_\alpha e^{-\alpha^2 \nu t} \left( A_\alpha \sin \alpha y + B_\alpha \cos\alpha y \right).\end{aligned} \hspace{\stretch{1}}(2.21)

We have constraints on $\alpha$ due to our boundary value conditions. For our sine terms to be solutions we require

\begin{aligned}\sin (\alpha (\pm h)) = \sin n \pi\end{aligned} \hspace{\stretch{1}}(2.22)

and for our cosine terms to be solutions we require

\begin{aligned}\cos (\alpha (\pm h)) = \cos \left( \frac{\pi}{2} + n \pi \right)\end{aligned} \hspace{\stretch{1}}(2.23)

\begin{aligned}\alpha &= \frac{2 n \pi}{2 h} \\ \alpha &= \frac{(2 n + 1) \pi}{2 h}\end{aligned} \hspace{\stretch{1}}(2.24)

respectively.

Our homogeneous solution therefore takes the form

\begin{aligned}u_H(y, t) = C_0 + \sum_{m > 0} C_m e^{ -(m \pi/2h)^2 \nu t } \left\{\begin{array}{l l}\sin \left( \frac{ m \pi y }{2 h} \right) & \quad \mbox{$m$ even} \\ \cos \left( \frac{ m \pi y }{2 h} \right) & \quad \mbox{$m$ odd} \\ \end{array}\right.\end{aligned} \hspace{\stretch{1}}(2.26)

Our undetermined constants are fixed by the initial condition 2.16b at $t = 0$, leaving us to solve the Fourier problem

\begin{aligned}-\frac{P}{2 \mu} \left( h^2 - y^2 \right)=\sum_{m \ge 0} C_m \left\{\begin{array}{l l}\sin \left( \frac{ m \pi y }{2 h} \right) & \quad \mbox{$m$ even} \\ \cos \left( \frac{ m \pi y }{2 h} \right) & \quad \mbox{$m$ odd} \\ \end{array}\right.\end{aligned} \hspace{\stretch{1}}(2.27)

Multiplying by a sine and integrating will clearly give zero (even times odd function over a symmetric interval). Let’s see if there’s any scaling required to select out the $C_m$ term

\begin{aligned}\int_{-h}^h \cos \left( \frac{ m \pi y }{2 h} \right) \cos \left( \frac{ n \pi y }{2 h} \right) dy&=\frac{2h}{\pi} \int_{-h}^h \cos \left( \frac{ m \pi y }{2 h} \right) \cos \left( \frac{ n \pi y }{2 h} \right) \pi dy/2h \\ &=\frac{2h}{\pi} \int_{-\pi/2}^{\pi/2}\cos m x \cos n x dx \\ &=\frac{h}{\pi} \int_{-\pi/2}^{\pi/2}\left( \cos( (m - n) x ) +\cos( (m + n) x ) \right) dx\end{aligned}

Note that since $m$ and $n$ must be odd, $m \pm n = 2 c$ for some integer $c$, so this integral is zero unless $m = n$ (consider $m = 2 a + 1, n = 2 b + 1$). For the $m = n$ term we have

\begin{aligned}\int_{-h}^h \cos \left( \frac{ m \pi y }{2 h} \right) \cos \left( \frac{ n \pi y }{2 h} \right) dy&=\frac{h}{\pi} \int_{-\pi/2}^{\pi/2}\left( 1+\cos( 2 m x ) \right) dx \\ &=h,\end{aligned}

Therefore, our constants $C_m$ (for odd $m$) are given by

\begin{aligned}C_m &= -\frac{P h }{2 \mu} \int_{-h}^h\left( 1 - \left( \frac{y}{h}\right)^2 \right)\cos \left( \frac{ m \pi y }{2 h} \right) dy \\ &=-\frac{P h^2 }{2 \mu} \int_{-1}^1\left( 1 - x^2 \right)\cos \left( \frac{ m \pi x }{2} \right) dx.\end{aligned}

With $m = 2 n + 1$, we have

\begin{aligned}C_{2 n + 1} = -\frac{16 P h^2 (-1)^n}{\mu \pi^3 (2 n + 1)^3}.\end{aligned} \hspace{\stretch{1}}(2.28)

Our complete solution is

\begin{aligned}u(y, t) =\frac{P h^2}{2 \mu} \left( 1 - \left( \frac{y}{h} \right)^2 \right)- \frac{16 P h^2 }{\mu \pi^3 }\sum_{n = 0}^\infty \frac{(-1)^n}{(2 n + 1)^3}e^{ -((2 n + 1) \pi/2h)^2 \nu t } \cos \left( \frac{ (2 n + 1) \pi y }{2 h} \right) .\end{aligned} \hspace{\stretch{1}}(2.29)
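A numerical check of this closed form: at $t = 0$ the series should cancel the parabola (fluid at rest), the no-slip condition should hold for all time, and at large times only the steady state should remain. The parameter values here are illustrative, not the water-based ones used below:

```python
import numpy as np

# Evaluate the series solution (2.29) and check its defining properties.
# P, mu, nu, h are illustrative values, not from the post.
P, mu, nu, h = 1.0, 1.0, 1.0, 1.0

def u(y, t, nterms=200):
    steady = P * h**2 / (2 * mu) * (1 - (y / h)**2)
    n = np.arange(nterms)
    k = 2 * n + 1
    series = np.sum(
        (-1.0)**n / k**3
        * np.exp(-(k * np.pi / (2 * h))**2 * nu * t)
        * np.cos(k * np.pi * y / (2 * h))
    )
    return steady - 16 * P * h**2 / (mu * np.pi**3) * series

print(abs(u(0.37, 0.0)) < 1e-3)                          # True: at rest at t = 0
print(abs(u(h, 2.0)) < 1e-9)                             # True: no-slip at the wall
print(abs(u(0.0, 10.0) - P * h**2 / (2 * mu)) < 1e-6)    # True: steady state at large t
```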

The largest of the damped exponentials above is the $n = 0$ term which is

\begin{aligned}e^{ - \pi^2 \nu t /h^2 },\end{aligned} \hspace{\stretch{1}}(2.30)

so if $\nu t \gg h^2$ these terms all die off, leaving us with just the steady state.

Rather remarkably, this Fourier series is actually a very good fit even after only a single term. Using the viscosity and density of water, $h = 1 \text{cm}$, and $P = 3 \times \mu_{\text{water}} \times (2 \text{cm}/{s})/ h^2$ (parameterizing the pressure gradient by the average velocity it will induce), a plot of the parabola that we are fitting to and the difference of that from the first Fourier term is shown in figure (1).

Figure 1: Parabolic channel flow steady state, and difference from first Fourier term.

The higher order corrections are even smaller. Even the first order deviation from the parabola that we are fitting to is a correction on the scale of $1/100$ of the height of the parabola. This is illustrated in figure (2) where the magnitude of the first 5 deviations from the steady state are plotted.

Figure 2: Difference from the steady state for the first five Fourier terms.

An animation of the time evolution above can be found in figure (3). If this animation is unavailable, it can also be found at http://youtu.be/0vZuv9HBtmo.

Figure 3: Time evolution of channel flow velocity profile after turning on a constant pressure gradient.

It’s also interesting to look at the very earliest part of the time evolution. Observe in figure (4) (or http://youtu.be/dDkx8iLwOew) the oscillatory phenomena. Could some of that be due to not running with enough Fourier terms in this early part of the evolution, when more terms are probably significant?

Figure 4: Early time evolution of channel flow velocity profile after turning on a constant pressure gradient.

# References

[1] D.J. Acheson. Elementary fluid dynamics. Oxford University Press, USA, 1990.

## PHY354H1S. Advanced Classical Mechanics (Taught by Prof. Erich Poppitz). Phase Space and Trajectories.

Posted by peeterjoot on March 5, 2012

[Click here for a PDF of this post with nicer formatting and figures if the post had any (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

# Phase space and phase trajectories.

The phase space of a mechanical system is the space of its $p$‘s and $q$‘s, and phase trajectories are curves in that space. Phase space is always even dimensional, with as many $p$‘s as $q$‘s: for $N$ particles in 3D it is a $6N$ dimensional space.

The state of a mechanical system $\equiv$ the point in phase space.
Time evolution $\equiv$ a curve in phase space.

Example: 1 dim system, say a harmonic oscillator.

\begin{aligned}H = \frac{p^2}{2m} + \frac{1}{2} m \omega^2 q^2\end{aligned} \hspace{\stretch{1}}(1.1)

Our phase space trajectory can be illustrated as an ellipse, as in figure (1), which shows the phase space trajectories of the SHO.

Figure 1: Harmonic oscillator phase space trajectory.

The equation describing the ellipse is

\begin{aligned}E = \frac{p^2}{2m} + \frac{1}{2} m \omega^2 q^2,\end{aligned} \hspace{\stretch{1}}(1.2)

which we can put into standard elliptical form as

\begin{aligned}1 = \left( \frac{p}{\sqrt{2 m E}}\right)^2 + \left(\sqrt{\frac{m}{2 E}} \omega q\right)^2\end{aligned} \hspace{\stretch{1}}(1.3)
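Numerically, a trajectory generated by Hamilton's equations stays on this ellipse; a sketch with illustrative $m$, $\omega$, and initial conditions, using a leapfrog step:

```python
# Trace the harmonic oscillator phase trajectory and confirm it stays on
# the ellipse H = E. m, omega and initial conditions are illustrative.
m, omega = 1.0, 2.0
q, p = 1.0, 0.0                  # start at maximum displacement
E0 = p**2 / (2 * m) + 0.5 * m * omega**2 * q**2

dt, steps = 1e-4, 50_000
for _ in range(steps):           # leapfrog preserves the ellipse well
    p -= 0.5 * dt * m * omega**2 * q
    q += dt * p / m
    p -= 0.5 * dt * m * omega**2 * q

E = p**2 / (2 * m) + 0.5 * m * omega**2 * q**2
print(abs(E - E0) / E0 < 1e-5)   # True: the point stays on H = E
```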

## Applications of $H$.

• Classical stat mech.
• Transition into QM via Poisson brackets.
• Mathematical theorems about phase space “flow”.
• Perturbation theory.

## Poisson brackets.

Poisson brackets arise very naturally if one asks about the time evolution of a function $f(p, q, t)$ on phase space.

\begin{aligned}\frac{d{{}}}{dt} f(p_i, q_i, t)&=\sum_i \frac{\partial {f}}{\partial {p_i}} \frac{d{p_i}}{dt}+ \frac{\partial {f}}{\partial {q_i}} \frac{d{q_i}}{dt}+ \frac{\partial {f}}{\partial {t}} \\ &=\sum_i- \frac{\partial {f}}{\partial {p_i}} \frac{\partial {H}}{\partial {q_i}}+ \frac{\partial {f}}{\partial {q_i}} \frac{\partial {H}}{\partial {p_i}}+ \frac{\partial {f}}{\partial {t}}\end{aligned}

Define the commutator of $H$ and $f$ as

\begin{aligned}\left[{H},{f}\right] =\sum_i\frac{\partial {H}}{\partial {p_i}}\frac{\partial {f}}{\partial {q_i}}-\frac{\partial {H}}{\partial {q_i}}\frac{\partial {f}}{\partial {p_i}}\end{aligned} \hspace{\stretch{1}}(1.4)

This is the Poisson bracket of $H(p,q,t)$ with $f(p,q,t)$, defined for arbitrary functions on phase space.

Note that other sign conventions exist (apparently Landau and Lifshitz use the opposite one).

So we have

\begin{aligned}\frac{d{{}}}{dt} f(p_i, q_i, t) = \left[{H},{f}\right] + \frac{\partial {f}}{\partial {t}}.\end{aligned} \hspace{\stretch{1}}(1.5)

Corollaries:

If $f$ has no explicit time dependence ${\partial {f}}/{\partial {t}} = 0$ and if $\left[{H},{f}\right] = 0$, then $f$ is an integral of motion.

In QM conserved quantities are the ones that commute with the Hamiltonian operator.

To see the analogy better, recall the definition of the Poisson bracket

\begin{aligned}\left[{f},{g}\right] =\sum_i\frac{\partial {f}}{\partial {p_i}}\frac{\partial {g}}{\partial {q_i}}-\frac{\partial {f}}{\partial {q_i}}\frac{\partial {g}}{\partial {p_i}}\end{aligned} \hspace{\stretch{1}}(1.6)
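This definition translates directly to code; a sympy sketch in the sign convention of 1.6, checked against the canonical relations (the symbols and implementation are mine, not from the notes):

```python
import sympy as sp

# Direct implementation of the bracket (1.6):
# [f, g] = sum_i df/dp_i dg/dq_i - df/dq_i dg/dp_i
qs = sp.symbols('q1 q2 q3')
ps = sp.symbols('p1 p2 p3')

def poisson(f, g):
    return sum(sp.diff(f, p) * sp.diff(g, q) - sp.diff(f, q) * sp.diff(g, p)
               for q, p in zip(qs, ps))

# canonical relations in this convention: [q_i, p_j] = -delta_ij
print(poisson(qs[0], ps[0]))   # -1
print(poisson(qs[0], ps[1]))   # 0
print(poisson(ps[0], ps[1]))   # 0
```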

Properties of Poisson bracket

• antisymmetric

\begin{aligned}\left[{f},{g}\right] = -\left[{g},{f}\right].\end{aligned} \hspace{\stretch{1}}(1.7)

• linear

\begin{aligned}\left[{a f + b h},{g}\right] &= a \left[{f},{g}\right] + b\left[{h},{g}\right] \\ \left[{g},{a f + b h}\right] &= a \left[{g},{f}\right] + b\left[{g},{h}\right].\end{aligned} \hspace{\stretch{1}}(1.8)

### Example. Compute $p$, $q$ commutators.

\begin{aligned}\left[{p_i},{p_j}\right] &= \sum_k\frac{\partial {p_i}}{\partial {p_k}}\frac{\partial {p_j}}{\partial {q_k}}-\frac{\partial {p_i}}{\partial {q_k}}\frac{\partial {p_j}}{\partial {p_k}} \\ &= 0,\end{aligned}

since the momenta have no coordinate dependence ($\partial p_j/\partial q_k = 0$).

So

\begin{aligned}\left[{p_i},{p_j}\right] = 0\end{aligned} \hspace{\stretch{1}}(1.10)

Similarly $\left[{q_i},{q_j}\right] = 0$.

\begin{aligned}\left[{q_i},{p_j}\right] &= \sum_k\frac{\partial {q_i}}{\partial {p_k}}\frac{\partial {p_j}}{\partial {q_k}}-\frac{\partial {q_i}}{\partial {q_k}}\frac{\partial {p_j}}{\partial {p_k}} \\ &= -\sum_k\delta_{ik}\delta_{jk} \\ &=-\delta_{ij},\end{aligned}

where the first product vanishes for the same reason.

So

\begin{aligned}\left[{q_i},{p_j}\right] = -\delta_{ij}.\end{aligned} \hspace{\stretch{1}}(1.11)

This provides a systematic (axiomatic) way to “quantize” a classical mechanics system, where we make replacements

\begin{aligned}q_i &\rightarrow \hat{q}_i \\ p_i &\rightarrow \hat{p}_i,\end{aligned} \hspace{\stretch{1}}(1.12)

and

\begin{aligned}\left[{q_i},{p_j}\right] = -\delta_{ij} &\rightarrow \left[{\hat{q}_i},{\hat{p}_j}\right] = i \hbar \delta_{ij} \\ H(p, q, t) &\rightarrow \hat{H}(\hat{p}, \hat{q}, t).\end{aligned} \hspace{\stretch{1}}(1.14)

So

\begin{aligned}\frac{\left[{\hat{q}_i},{\hat{p}_j}\right]}{-i \hbar } = - \delta_{ij}\end{aligned} \hspace{\stretch{1}}(1.16)

Our quantization of time evolution is therefore

\begin{aligned}\frac{d{{}}}{dt} \hat{q}_i &= \frac{1}{{-i\hbar}} \left[{\hat{H}},{\hat{q}_i}\right] \\ \frac{d{{}}}{dt} \hat{p}_i &= \frac{1}{{-i\hbar}} \left[{\hat{H}},{\hat{p}_i}\right].\end{aligned} \hspace{\stretch{1}}(1.17)

These are the Heisenberg equations of motion in QM.

### Conserved quantities.

For conserved quantities $f$, functions of the $p$‘s and $q$‘s, we have

\begin{aligned}\left[{f},{H}\right] = 0\end{aligned} \hspace{\stretch{1}}(1.19)

Considering the components $M_i$, where

\begin{aligned}\mathbf{M} = \mathbf{r} \times \mathbf{p},\end{aligned} \hspace{\stretch{1}}(1.20)

we can show (see 3.27 in Appendix I) that our Poisson brackets obey

\begin{aligned}\left[{M_x},{M_y}\right] &= -M_z \\ \left[{M_y},{M_z}\right] &= -M_x \\ \left[{M_z},{M_x}\right] &= -M_y\end{aligned} \hspace{\stretch{1}}(1.21)

(Prof Poppitz wasn’t sure of the sign of this and the particular bracket sign convention he happened to be using, but it appears he had it right).

These are the analogue of the momentum commutator relationships from QM right here in classical mechanics.

Considering the symmetries that lead to this conservation relationship, it is actually possible to show that rotations in 4D space lead to these symmetries and the conservation of the Runge-Lenz vector.

# Adiabatic changes in phase space and conserved quantities.

Consider a variable length pendulum, as illustrated in figure (2).

Figure 2: Variable length pendulum.

The period of this pendulum is

\begin{aligned}T = \frac{2 \pi}{\omega(t)} = 2 \pi \sqrt{\frac{l(t)}{g}}.\end{aligned} \hspace{\stretch{1}}(2.24)

Imagine that we change the length $l(t)$ very slowly so that

\begin{aligned}T \frac{1}{{l}} \frac{d{{l}}}{dt} \ll 1\end{aligned} \hspace{\stretch{1}}(2.25)

where $T$ is the period of oscillation. This is what’s called an adiabatic change, where the change of $\omega$ is small over a period.

It turns out that if this rate of change is slow, then there is actually an invariant, and

\begin{aligned}\frac{E}{\omega},\end{aligned} \hspace{\stretch{1}}(2.26)

is the so-called “adiabatic invariant”. There’s an important application of this (and some relations to QM). Imagine that we have a particle bounded by two walls, where the walls are moved very slowly as in figure (3).

Figure 3: Particle constrained by slowly moving walls.

This can be used to derive the adiabatic equation for an ideal gas (also using the equipartition theorem).
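The invariance of $E/\omega$ is easy to probe numerically. The following sketch (my own check, not from the lecture) integrates $\ddot{x} = -\omega^2(t) x$ with RK4 while ramping $\omega$ slowly from $1$ to $1.3$; the ramp time and step size are arbitrary choices:

```python
T_TOTAL = 200.0   # ramp duration; chosen so the change of omega per period is small

def omega(t):
    # slow linear ramp of the oscillator frequency
    return 1.0 + 0.3 * t / T_TOTAL

def deriv(t, x, v):
    # x'' = -omega(t)^2 x, written as a first order system
    return v, -omega(t)**2 * x

def rk4_step(t, x, v, dt):
    k1x, k1v = deriv(t, x, v)
    k2x, k2v = deriv(t + dt/2, x + dt/2*k1x, v + dt/2*k1v)
    k3x, k3v = deriv(t + dt/2, x + dt/2*k2x, v + dt/2*k2v)
    k4x, k4v = deriv(t + dt, x + dt*k3x, v + dt*k3v)
    return (x + dt/6*(k1x + 2*k2x + 2*k3x + k4x),
            v + dt/6*(k1v + 2*k2v + 2*k3v + k4v))

def adiabatic_ratio(dt=0.005):
    # track E/omega for a unit mass oscillator started at x = 1, v = 0
    t, x, v = 0.0, 1.0, 0.0
    ratios = []
    while t < T_TOTAL:
        x, v = rk4_step(t, x, v, dt)
        t += dt
        E = 0.5 * (v*v + (omega(t)*x)**2)
        ratios.append(E / omega(t))
    return min(ratios), max(ratios)
```

Despite the energy growing by roughly thirty percent over the run, the ratio $E/\omega$ stays pinned near its initial value of $0.5$.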

# Appendix I. Poisson brackets of angular momentum.

Let’s verify the angular momentum relations of 1.21 above (summation over $k$ implied):

\begin{aligned}\left[{M_i},{M_j}\right] &=\frac{\partial {M_i}}{\partial {p_k}}\frac{\partial {M_j}}{\partial {x_k}}-\frac{\partial {M_i}}{\partial {x_k}}\frac{\partial {M_j}}{\partial {p_k}} \\ &=\epsilon_{a b i}\epsilon_{r s j}\frac{\partial {x_a p_b}}{\partial {p_k}}\frac{\partial {x_r p_s}}{\partial {x_k}}-\epsilon_{a b i} \epsilon_{r s j}\frac{\partial {x_a p_b}}{\partial {x_k}}\frac{\partial {x_r p_s}}{\partial {p_k}} \\ &=\epsilon_{a b i}\epsilon_{r s j}x_a \frac{\partial {p_b}}{\partial {p_k}}p_s \frac{\partial {x_r}}{\partial {x_k}}-\epsilon_{a b i} \epsilon_{r s j}p_b \frac{\partial {x_a}}{\partial {x_k}}x_r \frac{\partial {p_s}}{\partial {p_k}} \\ &=\epsilon_{a b i}\epsilon_{r s j}x_a \delta_{k b}p_s \delta_{k r}-\epsilon_{a b i} \epsilon_{r s j}p_b \delta_{k a}x_r \delta_{s k} \\ &=\epsilon_{a b i}\epsilon_{r s j}x_a p_s \delta_{b r}-\epsilon_{a b i} \epsilon_{r s j}p_b x_r \delta_{a s} \\ &=\epsilon_{a r i}\epsilon_{r s j}x_a p_s -\epsilon_{s b i} \epsilon_{r s j}p_b x_r \\ &=-\delta_{a i}^{[s j]}x_a p_s -\delta_{b i}^{[j r]}p_b x_r \\ &=-\left(\delta_{a s}\delta_{i j}-\delta_{a j}\delta_{i s}\right)x_a p_s -\left(\delta_{b j}\delta_{i r}-\delta_{b r}\delta_{i j}\right)p_b x_r \\ &=-\delta_{a s}\delta_{i j}x_a p_s +\delta_{a j}\delta_{i s}x_a p_s -\delta_{b j}\delta_{i r}p_b x_r +\delta_{b r}\delta_{i j}p_b x_r \\ &=-\not{{x_s p_s \delta_{i j}}}+x_j p_i -p_j x_i +\not{{p_b x_b \delta_{i j}}}\\ \end{aligned}

So, as claimed, for cyclic $i$, $j$, $k$ we have

\begin{aligned}\left[{M_i},{M_j}\right] = -M_k.\end{aligned} \hspace{\stretch{1}}(3.27)
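The same result can be confirmed symbolically. A small sketch using sympy, with the bracket convention of the calculation above, $\left[{A},{B}\right] = \sum_k \partial_{p_k} A \, \partial_{x_k} B - \partial_{x_k} A \, \partial_{p_k} B$:

```python
import sympy as sp

x, y, z, px, py, pz = sp.symbols('x y z p_x p_y p_z')
q = (x, y, z)
p = (px, py, pz)

def pb(a, b):
    # Poisson bracket, with the sign convention used in the index calculation above
    return sum(sp.diff(a, p[k])*sp.diff(b, q[k]) - sp.diff(a, q[k])*sp.diff(b, p[k])
               for k in range(3))

Mx = y*pz - z*py
My = z*px - x*pz
Mz = x*py - y*px

# pb(Mx, My) expands to -Mz, and cyclically for the other two brackets
```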

# Appendix II. EOM for the variable length pendulum.

Since we’ve referred to a variable length pendulum above, let’s recall what form the EOM for this system take. With cylindrical coordinates as in figure (\ref{fig:phaseSpaceAndTrajectories:phaseSpaceAndTrajectoriesFig4}), and a spring constant $\omega_0^2 = k/m$ our Lagrangian is

\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.2\textheight]{phaseSpaceAndTrajectoriesFig4}
\caption{Variable length (spring) pendulum.}
\end{figure}

\begin{aligned}\mathcal{L} = \frac{1}{{2}} m \left( \dot{r}^2 + r^2 \dot{\theta}^2 \right) - \frac{1}{{2}} m \omega_0^2 r^2 - m g r( 1 - \cos\theta)\end{aligned} \hspace{\stretch{1}}(4.28)

The EOM follow immediately

\begin{aligned}P_\theta &= \frac{\partial {\mathcal{L}}}{\partial {\dot{\theta}}} = m r^2 \dot{\theta} \\ P_r &= \frac{\partial {\mathcal{L}}}{\partial {\dot{r}}} = m \dot{r} \\ \frac{d{{P_\theta}}}{dt} &= \frac{\partial {\mathcal{L}}}{\partial {\theta}} = -m g r \sin\theta \\ \frac{d{{P_r}}}{dt} &= \frac{\partial {\mathcal{L}}}{\partial {r}} = m r \dot{\theta}^2 - m \omega_0^2 r - m g (1 - \cos\theta)\end{aligned} \hspace{\stretch{1}}(4.29)

Or

\begin{aligned}\frac{d{{}}}{dt} \left( r^2 \dot{\theta} \right) &= - g r \sin\theta \\ \frac{d{{}}}{dt} \left( \dot{r} \right) &= r \left( \dot{\theta}^2 - \omega_0^2 \right) - g (1 - \cos\theta)\end{aligned} \hspace{\stretch{1}}(4.33)

Even in the small angle limit this isn’t a terribly friendly looking system

\begin{aligned}r \ddot{\theta} + 2 \dot{\theta} \dot{r} + g \theta &= 0 \\ \ddot{r} - r \dot{\theta}^2 + r \omega_0^2 &= 0.\end{aligned} \hspace{\stretch{1}}(4.35)

However, in the first equation of this system

\begin{aligned}\ddot{\theta} + 2 \dot{\theta} \frac{\dot{r}}{r} + \frac{1}{{r}} g \theta = 0,\end{aligned} \hspace{\stretch{1}}(4.37)

we do see the $\dot{r}/r$ dependence mentioned in class, and see how, when this term is small, the system still approximately has the form of a SHO.

## A different derivation of the adiabatic perturbation coefficient equation

Posted by peeterjoot on October 27, 2011

[Click here for a PDF of this post with nicer formatting and figures if the post had any (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

# Motivation.

Professor Sipe’s adiabatic perturbation and that of the text [1] in section 17.5.1 and section 17.5.2 use different notation for $\gamma_m$ and take a slightly different approach. We can find Prof Sipe’s final result with a bit less work, if a hybrid of the two methods is used.

# Guts

Our starting point is the same, we have a time dependent slowly varying Hamiltonian

\begin{aligned}H = H(t),\end{aligned} \hspace{\stretch{1}}(2.1)

where our perturbation starts at some specific time from a given initial state

\begin{aligned}H(t) = H_0, \qquad t \le 0.\end{aligned} \hspace{\stretch{1}}(2.2)

We assume that instantaneous eigenkets can be found, satisfying

\begin{aligned}H(t) {\left\lvert {n(t)} \right\rangle} = E_n(t) {\left\lvert {n(t)} \right\rangle}\end{aligned} \hspace{\stretch{1}}(2.3)

Here I’ll use ${\left\lvert {n} \right\rangle} \equiv {\left\lvert {n(t)} \right\rangle}$ instead of the ${\left\lvert {\hat{\psi}_n(t)} \right\rangle}$ that we used in class because it’s easier to write.

Now suppose that we have some arbitrary state, expressed in terms of the instantaneous basis kets ${\left\lvert {n} \right\rangle}$

\begin{aligned}{\left\lvert {\psi} \right\rangle} = \sum_n \bar{b}_n(t) e^{-i\alpha_n + i \gamma_n} {\left\lvert {n} \right\rangle},\end{aligned} \hspace{\stretch{1}}(2.4)

where

\begin{aligned}\alpha_n(t) = \frac{1}{{\hbar}} \int_0^t dt' E_n(t'),\end{aligned} \hspace{\stretch{1}}(2.5)

and $\gamma_n$ (using the notation in the text, not in class) is to be determined.

For this state, we have at the time just before the perturbation

\begin{aligned}{\left\lvert {\psi(0)} \right\rangle} = \sum_n \bar{b}_n(0) e^{-i\alpha_n(0) + i \gamma_n(0)} {\left\lvert {n(0)} \right\rangle}.\end{aligned} \hspace{\stretch{1}}(2.6)

The question to answer is: How does this particular state evolve?

Another question, for those that don’t like sneaky bastard derivations, is where did that magic factor of $e^{-i\alpha_n}$ come from in our superposition state? We will see after we start taking derivatives that this is what we need to cancel the $H(t){\left\lvert {n} \right\rangle}$ in Schr\”{o}dinger’s equation.

Proceeding to plug into the evolution identity we have

\begin{aligned}0 &={\left\langle {m} \right\rvert} \left( i \hbar \frac{d{{}}}{dt} - H(t) \right) {\left\lvert {\psi} \right\rangle} \\ &={\left\langle {m} \right\rvert} \left(\sum_n e^{-i \alpha_n + i \gamma_n}(i \hbar) \left(\frac{d{{\bar{b}_n}}}{dt}+ \bar{b}_n \left(-i \not{{\frac{E_n}{\hbar}}} + i \dot{\gamma}_n \right)\right) {\left\lvert {n} \right\rangle}+ i \hbar \bar{b}_n \frac{d{{}}}{dt} {\left\lvert {n} \right\rangle}- \not{{E_n \bar{b}_n {\left\lvert {n} \right\rangle}}} \right)\\ &=e^{-i \alpha_m + i \gamma_m}(i \hbar) \frac{d{{\bar{b}_m}}}{dt}+e^{-i \alpha_m + i \gamma_m}(i \hbar) i \dot{\gamma}_m \bar{b}_m+ i \hbar \sum_n \bar{b}_n {\left\langle {m} \right\rvert} \frac{d{{}}}{dt} {\left\lvert {n} \right\rangle}e^{-i \alpha_n + i \gamma_n} \\ &\sim\frac{d{{\bar{b}_m}}}{dt}+i \dot{\gamma}_m \bar{b}_m+ \sum_n e^{-i \alpha_n + i \gamma_n}e^{i \alpha_m - i \gamma_m}\bar{b}_n {\left\langle {m} \right\rvert} \frac{d{{}}}{dt} {\left\lvert {n} \right\rangle} \\ &=\frac{d{{\bar{b}_m}}}{dt}+i \dot{\gamma}_m \bar{b}_m+ \bar{b}_m {\left\langle {m} \right\rvert} \frac{d{{}}}{dt} {\left\lvert {m} \right\rangle}+\sum_{n \ne m} e^{-i \alpha_n + i \gamma_n}e^{i \alpha_m - i \gamma_m}\bar{b}_n {\left\langle {m} \right\rvert} \frac{d{{}}}{dt} {\left\lvert {n} \right\rangle}\end{aligned}

We are free to pick $\gamma_m$ to kill the second and third terms

\begin{aligned}0 =i \dot{\gamma}_m \bar{b}_m+ \bar{b}_m {\left\langle {m} \right\rvert} \frac{d{{}}}{dt} {\left\lvert {m} \right\rangle},\end{aligned} \hspace{\stretch{1}}(2.7)

or

\begin{aligned}\dot{\gamma}_m = i {\left\langle {m} \right\rvert} \frac{d{{}}}{dt} {\left\lvert {m} \right\rangle},\end{aligned} \hspace{\stretch{1}}(2.8)

which after integration is

\begin{aligned}\gamma_m(t)= i \int_0^t dt' {\left\langle {m(t')} \right\rvert} \frac{d}{dt'} {\left\lvert {m(t')} \right\rangle},\end{aligned} \hspace{\stretch{1}}(2.9)

As in class, we can observe that this is a purely real function. We are left with
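The reality of $\gamma_m$ can be spelled out in one line. Differentiating the normalization condition $\left\langle{{m}} \vert {{m}}\right\rangle = 1$ gives

\begin{aligned}0 = \frac{d}{dt} \left\langle{{m}} \vert {{m}}\right\rangle = \left( {\left\langle {m} \right\rvert} \frac{d{{}}}{dt} {\left\lvert {m} \right\rangle} \right)^{*} + {\left\langle {m} \right\rvert} \frac{d{{}}}{dt} {\left\lvert {m} \right\rangle},\end{aligned}

so ${\left\langle {m} \right\rvert} \frac{d{{}}}{dt} {\left\lvert {m} \right\rangle}$ is purely imaginary, and multiplying its integral by $i$ yields a real $\gamma_m(t)$.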

\begin{aligned}\frac{d{{\bar{b}_m}}}{dt}=-\sum_{n \ne m} \bar{b}_n e^{-i \alpha_{nm} + i \gamma_{nm}}{\left\langle {m} \right\rvert} \frac{d{{}}}{dt} {\left\lvert {n} \right\rangle} ,\end{aligned} \hspace{\stretch{1}}(2.10)

where

\begin{aligned}\alpha_{nm} &= \alpha_{n} -\alpha_m \\ \gamma_{nm} &= \gamma_{n} -\gamma_m \end{aligned} \hspace{\stretch{1}}(2.11)

The task is now to find solutions for these $\bar{b}_m$ coefficients, and we can refer to the class notes for that without change.

# References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.

## My submission for PHY356 (Quantum Mechanics I) Problem Set 3.

Posted by peeterjoot on November 30, 2010

# Problem 1.

## Statement

A particle of mass $m$ is free to move along the x-direction such that $V(X)=0$. The state of the system is represented by the wavefunction Eq. (4.74)

\begin{aligned}\psi(x,t) = \frac{1}{{\sqrt{2\pi}}} \int_{-\infty}^\infty dk e^{i k x} e^{- i \omega t} f(k)\end{aligned} \hspace{\stretch{1}}(1.1)

with $f(k)$ given by Eq. (4.59).

\begin{aligned}f(k) &= N e^{-\alpha k^2}\end{aligned} \hspace{\stretch{1}}(1.2)

Note that I’ve inserted a $1/\sqrt{2\pi}$ factor above that isn’t in the text, because otherwise $\psi(x,t)$ will not be unit normalized (assuming $f(k)$ is normalized in wavenumber space).

\begin{itemize}
\item
(a) What is the group velocity associated with this state?
\item
(b) What is the probability for measuring the particle at position $x=x_0>0$ at time $t=t_0>0$?
\item
(c) What is the probability per unit length for measuring the particle at position $x=x_0>0$ at time $t=t_0>0$?
\item
(d) Explain the physical meaning of the above results.
\end{itemize}

## Solution

### (a). group velocity.

To calculate the group velocity we need to know the dependence of $\omega$ on $k$.

Let’s step back and consider the time evolution action on $\psi(x,0)$. For the free particle case we have

\begin{aligned}H = \frac{\mathbf{p}^2}{2m} = -\frac{\hbar^2}{2m} \partial_{xx}.\end{aligned} \hspace{\stretch{1}}(1.3)

Writing $N' = N/\sqrt{2\pi}$ we have

\begin{aligned}-\frac{i t}{\hbar} H \psi(x,0) &= \frac{i t \hbar }{2m} N' \int_{-\infty}^\infty dk (i k)^2 e^{i k x - \alpha k^2} \\ &= N' \int_{-\infty}^\infty dk \frac{-i t \hbar k^2}{2m} e^{i k x - \alpha k^2}\end{aligned}

Each successive application of $-iHt/\hbar$ will introduce another power of $-it\hbar k^2/2 m$, so once we sum all the terms of the exponential series $U(t) = e^{-iHt/\hbar}$ we have

\begin{aligned}\psi(x,t) =N' \int_{-\infty}^\infty dk \exp\left( \frac{-i t \hbar k^2}{2m} + i k x - \alpha k^2 \right).\end{aligned} \hspace{\stretch{1}}(1.4)

Comparing with 1.1 we find

\begin{aligned}\omega(k) = \frac{\hbar k^2}{2m}.\end{aligned} \hspace{\stretch{1}}(1.5)

This completes this section of the problem since we are now able to calculate the group velocity

\begin{aligned}v_g = \frac{\partial {\omega(k)}}{\partial {k}} = \frac{\hbar k}{m}.\end{aligned} \hspace{\stretch{1}}(1.6)
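As a numerical aside (my own sanity check, not part of the assigned problem), we can sample the wavepacket on a grid, recenter the Gaussian $f(k)$ at an arbitrary $k_0$, and watch the peak of ${\left\lvert{\psi}\right\rvert}^2$ travel at $v_g = \hbar k_0/m$. All parameter values here are arbitrary choices:

```python
import numpy as np

hbar = m = 1.0
alpha, k0 = 0.5, 3.0                     # f(k) = exp(-alpha (k - k0)^2); the text has k0 = 0
k = np.linspace(k0 - 8, k0 + 8, 801)     # f(k) is negligible outside this window
x = np.linspace(-20.0, 60.0, 1601)
f = np.exp(-alpha*(k - k0)**2)

def peak_position(t):
    # psi(x, t) ~ sum over k of f(k) exp(i k x - i hbar k^2 t / 2m)
    phases = np.exp(1j*(np.outer(x, k) - hbar*k**2*t/(2*m)))
    psi = phases @ f
    return x[np.argmax(np.abs(psi))]
```

With $k_0 = 3$ the peak sits at $x = 0$ at $t = 0$ and near $x = 30$ at $t = 10$, consistent with $v_g = \hbar k_0/m = 3$.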

### (b). What is the probability for measuring the particle at position $x=x_0>0$ at time $t=t_0>0$?

In order to evaluate the probability, it looks desirable to evaluate the wave function integral 1.4.
Writing $2 \beta = i/(\alpha + i t \hbar/2m )$, the exponent of that integral is

\begin{aligned}-k^2 \left( \alpha + \frac{i t \hbar }{2m} \right) + i k x&=-\left( \alpha + \frac{i t \hbar }{2m} \right) \left( k^2 - \frac{i k x }{\alpha + \frac{i t \hbar }{2m} } \right) \\ &=-\frac{i}{2\beta} \left( (k - x \beta )^2 - x^2 \beta^2 \right)\end{aligned}

The $x^2$ portion of the exponential

\begin{aligned}\frac{i x^2 \beta^2}{2\beta} = \frac{i x^2 \beta}{2} = - \frac{x^2 }{4 (\alpha + i t \hbar /2m)}\end{aligned}

then comes out of the integral. We can also make a change of variables $q = k - x \beta$ to evaluate the remainder of the Gaussian and are left with

\begin{aligned}\psi(x,t) =N' \sqrt{ \frac{\pi}{\alpha + i t \hbar/2m} } \exp\left( - \frac{x^2 }{4 (\alpha + i t \hbar /2m)} \right).\end{aligned} \hspace{\stretch{1}}(1.7)

Observe that from 1.2 we can compute $N = (2 \alpha/\pi)^{1/4}$, which could be substituted back into 1.7 if desired.

Our probability density is

\begin{aligned}{\left\lvert{ \psi(x,t) }\right\rvert}^2 &=\frac{1}{{2 \pi}} N^2 {\left\lvert{ \frac{\pi}{\alpha + i t \hbar/2m} }\right\rvert} \exp\left( - \frac{x^2}{4} \left( \frac{1}{{(\alpha + i t \hbar /2m)}} + \frac{1}{{(\alpha - i t \hbar /2m)}} \right) \right) \\ &=\frac{1}{{2 \pi}} \sqrt{\frac{2 \alpha}{\pi} } \frac{\pi}{\sqrt{\alpha^2 + (t \hbar/2m)^2 }} \exp\left( - \frac{x^2}{4} \frac{1}{{\alpha^2 + (t \hbar/2m)^2 }} \left( \alpha - i t \hbar /2m + \alpha + i t \hbar /2m \right)\right)\end{aligned}

With a final regrouping of terms, this is

\begin{aligned}{\left\lvert{ \psi(x,t) }\right\rvert}^2 =\sqrt{\frac{ \alpha }{ 2 \pi \left( \alpha^2 + (t \hbar/2m)^2 \right) }}\exp\left( - \frac{x^2}{2} \frac{\alpha}{\alpha^2 + (t \hbar/2m)^2 } \right).\end{aligned} \hspace{\stretch{1}}(1.8)

As a sanity check we observe that this integrates to unity for all $t$ as desired. The probability that we find the particle at position $x > x_0$ is then

\begin{aligned}P_{x>x_0}(t) = \sqrt{\frac{ \alpha }{ 2 \pi \left( \alpha^2 + (t \hbar/2m)^2 \right) }}\int_{x=x_0}^\infty dx \exp\left( - \frac{x^2}{2} \frac{\alpha}{\alpha^2 + (t \hbar/2m)^2 } \right)\end{aligned} \hspace{\stretch{1}}(1.9)

The only simplification we can make is to rewrite this in terms of the complementary error function

\begin{aligned}\text{erfc}(x) = \frac{2}{\sqrt{\pi}} \int_x^\infty e^{-t^2} dt.\end{aligned} \hspace{\stretch{1}}(1.10)

Writing

\begin{aligned}\beta(t) = \frac{\alpha}{\alpha^2 + (t \hbar/2m)^2 },\end{aligned} \hspace{\stretch{1}}(1.11)

we have

\begin{aligned}P_{x>x_0}(t_0) = \frac{1}{{2}} \text{erfc} \left( \sqrt{\beta(t_0)/2} x_0 \right)\end{aligned} \hspace{\stretch{1}}(1.12)

Sanity checking this result, we note that since $\text{erfc}(0) = 1$ the probability for finding the particle in the $x>0$ range is $1/2$ as expected.
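As a further cross check (my own, with arbitrary parameter values), the closed form 1.12 can be compared against a direct Simpson integration of the density 1.8:

```python
import math

def density(x, alpha, D):
    # |psi(x,t)|^2 = sqrt(alpha/(2 pi D)) exp(-x^2 alpha/(2 D)), with D = alpha^2 + (t hbar/2m)^2
    return math.sqrt(alpha/(2*math.pi*D)) * math.exp(-x*x*alpha/(2*D))

def prob_tail(x0, alpha=1.0, t_term=0.7):
    # t_term stands in for t0 hbar / 2m; the numbers are arbitrary
    D = alpha**2 + t_term**2
    beta = alpha / D
    closed = 0.5 * math.erfc(math.sqrt(beta/2) * x0)      # the erfc closed form
    # numeric tail integral by Simpson's rule on [x0, x0 + 40/sqrt(beta)]
    n, a, b = 4000, x0, x0 + 40/math.sqrt(beta)
    h = (b - a)/n
    s = density(a, alpha, D) + density(b, alpha, D)
    for i in range(1, n):
        s += density(a + i*h, alpha, D) * (4 if i % 2 else 2)
    return closed, s*h/3
```

The two agree to well beyond the quadrature accuracy, and $x_0 = 0$ recovers the expected $1/2$.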

### (c). What is the probability per unit length for measuring the particle at position $x=x_0>0$ at time $t=t_0>0$?

The probability in a unit length interval centered on $x_0$ is thus

\begin{aligned}P_{x>x_0+1/2}(t_0) - P_{x>x_0-1/2}(t_0) &=\frac{1}{{2}} \text{erfc}\left( \sqrt{\frac{\beta(t_0)}{2}} \left(x_0+\frac{1}{{2}} \right) \right) -\frac{1}{{2}} \text{erfc}\left( \sqrt{\frac{\beta(t_0)}{2}} \left(x_0-\frac{1}{{2}} \right) \right) \end{aligned} \hspace{\stretch{1}}(1.13)

### (d). Explain the physical meaning of the above results.

To get an idea what the group velocity means, observe that we can write our wavefunction 1.1 as

\begin{aligned}\psi(x,t) = \frac{1}{{\sqrt{2\pi}}} \int_{-\infty}^\infty dk e^{i k (x - v_g t)} f(k)\end{aligned} \hspace{\stretch{1}}(1.14)

We see that the phase coefficient of the Gaussian $f(k)$ “moves” at the rate of the group velocity $v_g$. Also recall that in the text it is noted that the time dependent term 1.11 can be expressed in terms of position and momentum uncertainties $(\Delta x)^2$, and $(\Delta p)^2 = \hbar^2 (\Delta k)^2$. That is

\begin{aligned}\frac{1}{{\beta(t)}} = (\Delta x)^2 + \frac{(\Delta p)^2}{m^2} t^2 \equiv (\Delta x(t))^2\end{aligned} \hspace{\stretch{1}}(1.15)

This makes it evident that the probability density flattens and spreads over time at a rate equal to the uncertainty of the group velocity $\Delta p/m = \Delta v_g$ (since $v_g = \hbar k/m$). It is interesting that something as simple as this phase change results in a physically measurable phenomenon. We see that, as a direct result of this linear in time phase change, we are less able to find the particle localized around its original $x = 0$ position as more time elapses.

# Problem 2.

## Statement

A particle with intrinsic angular momentum or spin $s=1/2$ is prepared in the spin-up with respect to the z-direction state ${\lvert {f} \rangle}={\lvert {z+} \rangle}$. Determine

\begin{aligned}\left({\langle {f} \rvert} \left( S_z - {\langle {f} \rvert} S_z {\lvert {f} \rangle} \mathbf{1} \right)^2 {\lvert {f} \rangle} \right)^{1/2}\end{aligned} \hspace{\stretch{1}}(2.16)

and

\begin{aligned}\left({\langle {f} \rvert} \left( S_x - {\langle {f} \rvert} S_x {\lvert {f} \rangle} \mathbf{1} \right)^2 {\lvert {f} \rangle} \right)^{1/2}\end{aligned} \hspace{\stretch{1}}(2.17)

and explain what these relations say about the system.

## Solution: Uncertainty of $S_z$ with respect to ${\lvert {z+} \rangle}$

Noting that $S_z {\lvert {f} \rangle} = S_z {\lvert {z+} \rangle} = \hbar/2 {\lvert {z+} \rangle}$ we have

\begin{aligned}{\langle {f} \rvert} S_z {\lvert {f} \rangle} = \frac{\hbar}{2} \end{aligned} \hspace{\stretch{1}}(2.18)

The average outcome for many measurements of the physical quantity associated with the operator $S_z$ when the system has been prepared in the state ${\lvert {f} \rangle} = {\lvert {z+} \rangle}$ is $\hbar/2$.

\begin{aligned}\Bigl(S_z - {\langle {f} \rvert} S_z {\lvert {f} \rangle} \mathbf{1} \Bigr) {\lvert {f} \rangle}&= \frac{\hbar}{2} {\lvert {f} \rangle} -\frac{\hbar}{2} {\lvert {f} \rangle} = 0\end{aligned} \hspace{\stretch{1}}(2.19)

We could also compute this from the matrix representations, but it is slightly more work.

Operating once more with $S_z - {\langle {f} \rvert} S_z {\lvert {f} \rangle} \mathbf{1}$ on the zero ket vector still gives us zero, so we have zero in the root for 2.16

\begin{aligned}\left({\langle {f} \rvert} \left( S_z - {\langle {f} \rvert} S_z {\lvert {f} \rangle} \mathbf{1} \right)^2 {\lvert {f} \rangle} \right)^{1/2} = 0\end{aligned} \hspace{\stretch{1}}(2.20)

What does 2.20 say about the state of the system? Given many measurements of the physical quantity associated with the operator $V = (S_z - {\langle {f} \rvert} S_z {\lvert {f} \rangle} \mathbf{1})^2$, where the initial state of the system is always ${\lvert {f} \rangle} = {\lvert {z+} \rangle}$, then the average of the measurements of the physical quantity associated with $V$ is zero. We can think of the operator $V^{1/2} = S_z - {\langle {f} \rvert} S_z {\lvert {f} \rangle} \mathbf{1}$ as a representation of the observable, “how different is the measured result from the average ${\langle {f} \rvert} S_z {\lvert {f} \rangle}$”.

So, given a system prepared in state ${\lvert {f} \rangle} = {\lvert {z+} \rangle}$, and performance of repeated measurements capable of only examining spin-up, we find that the system is never any different than its initial spin-up state. We have no uncertainty that we will measure any difference from spin-up on average, when the system is prepared in the spin-up state.

## Solution: Uncertainty of $S_x$ with respect to ${\lvert {z+} \rangle}$

For this second part of the problem, we note that we can write

\begin{aligned}{\lvert {f} \rangle} = {\lvert {z+} \rangle} = \frac{1}{{\sqrt{2}}} ( {\lvert {x+} \rangle} + {\lvert {x-} \rangle} ).\end{aligned} \hspace{\stretch{1}}(2.21)

So the expectation value of $S_x$ with respect to this state is

\begin{aligned}{\langle {f} \rvert} S_x {\lvert {f} \rangle}&=\frac{1}{{2}}( {\langle {x+} \rvert} + {\langle {x-} \rvert} ) S_x ( {\lvert {x+} \rangle} + {\lvert {x-} \rangle} ) \\ &=\frac{\hbar}{4} ( {\langle {x+} \rvert} + {\langle {x-} \rvert} ) ( {\lvert {x+} \rangle} - {\lvert {x-} \rangle} ) \\ &=\frac{\hbar}{4} ( 1 - 0 + 0 - 1 ) \\ &= 0\end{aligned}

After repeated preparation of the system in state ${\lvert {f} \rangle}$, the average measurement of the physical quantity associated with operator $S_x$ is zero. In terms of the eigenstates for that operator ${\lvert {x+} \rangle}$ and ${\lvert {x-} \rangle}$ we have equal probability of measuring either given this particular initial system state.

For the variance calculation, this reduces our problem to the calculation of ${\langle {f} \rvert} S_x^2 {\lvert {f} \rangle}$, which is

\begin{aligned}{\langle {f} \rvert} S_x^2 {\lvert {f} \rangle} &=\frac{1}{{2}} \left( \frac{\hbar}{2} \right)^2 ( {\langle {x+} \rvert} + {\langle {x-} \rvert} ) ( (+1)^2 {\lvert {x+} \rangle} + (-1)^2 {\lvert {x-} \rangle} ) \\ &=\left( \frac{\hbar}{2} \right)^2,\end{aligned}

so for 2.17 we have

\begin{aligned}\left({\langle {f} \rvert} \left( S_x - {\langle {f} \rvert} S_x {\lvert {f} \rangle} \mathbf{1} \right)^2 {\lvert {f} \rangle} \right)^{1/2} = \frac{\hbar}{2}\end{aligned} \hspace{\stretch{1}}(2.22)

The average of the absolute magnitude of the physical quantity associated with operator $S_x$ is found to be $\hbar/2$ when repeated measurements are performed given a system initially prepared in state ${\lvert {f} \rangle} = {\lvert {z+} \rangle}$. We saw that the average value for the measurement of that physical quantity itself was zero, showing that we have equal probabilities of measuring either $\pm \hbar/2$ in this experiment. A measurement that would show whether the system was in the x-direction spin-up or spin-down state would find those states equally probable.
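These two standard deviations can also be reproduced mechanically from the matrix representations (a sketch of my own, in units with $\hbar = 1$):

```python
import numpy as np

hbar = 1.0
Sz = hbar/2 * np.array([[1, 0], [0, -1]], dtype=complex)
Sx = hbar/2 * np.array([[0, 1], [1, 0]], dtype=complex)
f = np.array([1, 0], dtype=complex)       # |z+> in the S_z basis

def std_dev(S, state):
    # sqrt(<(S - <S> 1)^2>) for a normalized state
    avg = np.vdot(state, S @ state).real
    dev = S - avg*np.eye(2)
    return np.sqrt(np.vdot(state, dev @ dev @ state).real)

# std_dev(Sz, f) vanishes, while std_dev(Sx, f) comes out to hbar/2
```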

I lost one mark on the group velocity response. Instead of 1.6 he wanted

\begin{aligned}v_g = {\left. \frac{\partial {\omega(k)}}{\partial {k}} \right\vert}_{k = k_0}= \frac{\hbar k_0}{m} = 0\end{aligned} \hspace{\stretch{1}}(3.23)

since $f(k)$ peaks at $k=0$.

I’ll have to go back and think about that a bit, because I’m unsure of the last bits of the reasoning there.

I also lost 0.5 and 0.25 (twice) because I didn’t explicitly state that the probability that the particle is at $x_0$, a specific single point, is zero. I thought that was obvious and didn’t have to be stated, but it appears expressing this explicitly is what he was looking for.

Curiously, one thing that I didn’t lose marks on was the wrong answer for the probability per unit length. What he was actually asking for was the following

\begin{aligned}\lim_{\epsilon \rightarrow 0} \frac{1}{{\epsilon}} \int_{x_0 - \epsilon/2}^{x_0 + \epsilon/2} {\left\lvert{ \Psi(x_0, t_0) }\right\rvert}^2 dx = {\left\lvert{\Psi(x_0, t_0)}\right\rvert}^2\end{aligned} \hspace{\stretch{1}}(3.24)

That’s a much more sensible quantity to calculate than what I did, but I don’t think that I can be faulted too much, since the phrase was never used in the text nor in the lectures.

## Notes and problems for Desai chapter III.

Posted by peeterjoot on October 9, 2010

# Notes.

Chapter III notes and problems for [1].

FIXME:
Some puzzling stuff in the interaction section and superposition of time-dependent states sections. Work through those here.

# Problems

## Problem 1. Virial Theorem.

### Statement.

With the assumption that $\left\langle{{\mathbf{r} \cdot \mathbf{p}}}\right\rangle$ is independent of time, and

\begin{aligned}H = \frac{\mathbf{p}^2}{2m} + V(\mathbf{r}) = T + V\end{aligned} \hspace{\stretch{1}}(2.1)

show that

\begin{aligned}2 \left\langle{{T}}\right\rangle = \left\langle{{ \mathbf{r} \cdot \boldsymbol{\nabla} V}}\right\rangle.\end{aligned} \hspace{\stretch{1}}(2.2)

### Solution.

I floundered with this a bit, but found the required hint in physicsforums. We can start with the Hamiltonian time derivative relation

\begin{aligned}i\hbar \frac{d A_H}{dt} = \left[{A_H},{H}\right]\end{aligned} \hspace{\stretch{1}}(2.3)

So, with the assumption that $\left\langle{{\mathbf{r} \cdot \mathbf{p}}}\right\rangle$ is independent of time, and the use of a stationary state ${\lvert {\psi} \rangle}$ for the expectation calculation we have

\begin{aligned}0 &=\frac{d}{dt} \left\langle{{\mathbf{r} \cdot \mathbf{p}}}\right\rangle \\ &=\frac{d}{dt} {\langle {\psi} \rvert} \mathbf{r} \cdot \mathbf{p} {\lvert {\psi} \rangle} \\ &={\langle {\psi} \rvert} \frac{d}{dt} ( \mathbf{r} \cdot \mathbf{p} ) {\lvert {\psi} \rangle} \\ &= \frac{1}{{i\hbar}} \left\langle{{ \left[{ \mathbf{r} \cdot \mathbf{p} },{H}\right] }}\right\rangle \\ &= -\left\langle{{ \left[{ \mathbf{r} \cdot \boldsymbol{\nabla} },{\frac{\mathbf{p}^2}{2m}}\right] }}\right\rangle -\left\langle{{ \left[{ \mathbf{r} \cdot \boldsymbol{\nabla} },{V(\mathbf{r})}\right] }}\right\rangle.\end{aligned}

The exercise now becomes one of evaluating the remaining commutators. For the Laplacian commutator we have

\begin{aligned}\left[{ \mathbf{r} \cdot \boldsymbol{\nabla} },{\boldsymbol{\nabla}^2}\right] \psi&=x_m \partial_m \partial_n \partial_n \psi - \partial_n \partial_n x_m \partial_m \psi \\ &=x_m \partial_m \partial_n \partial_n \psi - \partial_n \partial_n \psi - \partial_n x_m \partial_n \partial_m \psi \\ &=x_m \partial_m \partial_n \partial_n \psi - \partial_n \partial_n \psi - \partial_n \partial_n \psi - x_m \partial_n \partial_n \partial_m \psi \\ &=- 2 \boldsymbol{\nabla}^2 \psi\end{aligned}

For the potential commutator we have

\begin{aligned}\left[{ \mathbf{r} \cdot \boldsymbol{\nabla} },{V(\mathbf{r})}\right] \psi&=x_m \partial_m (V \psi) -V x_m \partial_m \psi \\ &=x_m (\partial_m V) \psi + x_m V \partial_m \psi -V x_m \partial_m \psi \\ &=\Bigl( \mathbf{r} \cdot (\boldsymbol{\nabla} V) \Bigr) \psi\end{aligned}

Putting all the $\hbar$ factors back in, we get

\begin{aligned}2 \left\langle{{ \frac{\mathbf{p}^2}{2m} }}\right\rangle = \left\langle{{ \mathbf{r} \cdot (\boldsymbol{\nabla} V) }}\right\rangle,\end{aligned} \hspace{\stretch{1}}(2.4)

which is the desired result.
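A concrete check of 2.4 (my own, using the 1D harmonic oscillator ground state in units $m = \omega = \hbar = 1$, where both sides work out to $1/2$):

```python
import math

def psi(x):
    # ground state of V = x^2/2: psi(x) = pi^{-1/4} exp(-x^2/2)
    return math.pi**-0.25 * math.exp(-x*x/2)

def simpson(f, a, b, n=2000):
    h = (b - a)/n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i*h) * (4 if i % 2 else 2)
    return s*h/3

# <T> = (1/2) int psi'^2 dx after integrating by parts, with psi' = -x psi here
kinetic = 0.5 * simpson(lambda x: (x*psi(x))**2, -10.0, 10.0)
# <x V'(x)> = <x^2> for V = x^2/2
virial = simpson(lambda x: x*x*psi(x)**2, -10.0, 10.0)
# 2 * kinetic and virial agree, both near 1/2
```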

Followup: why assume $\left\langle{{\mathbf{r} \cdot \mathbf{p}}}\right\rangle$ is independent of time?

## Problem 2. Application of virial theorem.

Calculate $\left\langle{{T}}\right\rangle$ with $V = \lambda \ln(r/a)$.

\begin{aligned}\mathbf{r} \cdot \boldsymbol{\nabla} V &= r \hat{\mathbf{r}} \cdot \hat{\mathbf{r}} \lambda \frac{\partial {\ln(r/a)}}{\partial {r}} \\ &= \lambda r \frac{1}{{a}} \frac{a}{r} \\ &= \lambda \\ \implies \\ \left\langle{{T}}\right\rangle &= \lambda/2\end{aligned}

## Problem 3. Heisenberg Position operator representation.

### Part I.

Express $x$ as an operator $x_H$ for $H = \mathbf{p}^2/2m$.

With

\begin{aligned}{\langle {\psi} \rvert} x {\lvert {\psi} \rangle} = {\langle {\psi_0} \rvert} U^\dagger x U {\lvert {\psi_0} \rangle}\end{aligned}

We want to expand

\begin{aligned}x_H &= U^\dagger x U \\ &= e^{i H t/\hbar} x e^{-iH t/\hbar} \\ &= \sum_{k,l = 0}^\infty \frac{1}{{k!}} \frac{1}{{l!}} \left(\frac{i H t}{\hbar}\right)^k x \left(\frac{-i H t}{\hbar}\right)^l .\end{aligned}

We need to evaluate $H^k x H^l$ to proceed. Using $p^n x = -i \hbar n p^{n-1} + x p^n$, we have

\begin{aligned}H^k x &= \frac{1}{{(2m)^k}} p^{2k} x \\ &= \frac{1}{{(2m)^k}} \left( -i \hbar (2k) p^{2k-1} + x p^{2k} \right) \\ &= x H^k + \frac{1}{{2m}} (-i \hbar) (2k) p \frac{p^{2(k-1)}}{(2m)^{k-1}} \\ &= x H^k - \frac{i \hbar k}{m} p H^{k-1}.\end{aligned}

This gives us

\begin{aligned}x_H &= x - \frac{i \hbar p }{m} \sum_{k,l=0}^\infty \frac{k}{k!} \frac{1}{{l!}}\left(\frac{i t}{\hbar}\right)^k H^{k-1 + l}\left(\frac{-i t}{\hbar}\right)^l \\ &= x - \frac{i \hbar p i t }{m \hbar} \end{aligned}

Or

\begin{aligned}x_H &= x + \frac{p t }{m} \end{aligned} \hspace{\stretch{1}}(2.5)

### Part II.

Express $x$ as an operator $x_H$ for $H = \mathbf{p}^2/2m + V$ with $V = \lambda x^n$.

In retrospect, for the first part of this problem, it would have been better to use the series expansion for this exponential sandwich.

Or, in explicit form

\begin{aligned}e^A B e^{-A}&=B + \frac{1}{{1!}} \left[{A},{B}\right]+ \frac{1}{{2!}} \left[{A},{\left[{A},{B}\right]}\right]+ \cdots\end{aligned} \hspace{\stretch{1}}(2.6)

Doing so, we’d find for the first commutator

\begin{aligned}\frac{i t}{2m \hbar} \left[{\mathbf{p}^2},{x}\right] = \frac{t p}{m},\end{aligned} \hspace{\stretch{1}}(2.7)

so that the series has only the first two terms, and we’d obtain the same result. That seems like a logical approach to try here too. For the first commutator, we get the same $tp/m$ result since $\left[{V},{x}\right] = 0$.
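Identity 2.6 itself is easy to spot check numerically, with random matrices standing in for the operators (a sketch of my own; the small scale factor on $A$ keeps the nested commutator series converging quickly):

```python
import numpy as np

rng = np.random.default_rng(0)
A = 0.1 * rng.normal(size=(3, 3))
B = rng.normal(size=(3, 3))

def expm(M, terms=40):
    # matrix exponential by Taylor series; adequate for small-norm M
    out = np.eye(len(M))
    term = np.eye(len(M))
    for j in range(1, terms):
        term = term @ M / j
        out = out + term
    return out

def comm(X, Y):
    return X @ Y - Y @ X

lhs = expm(A) @ B @ expm(-A)
rhs = B.copy()
nested = B.copy()
fact = 1.0
for j in range(1, 12):
    nested = comm(A, nested)   # [A, [A, ... [A, B]]]
    fact *= j
    rhs = rhs + nested / fact
# lhs and rhs agree to roughly machine precision
```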

Employing

\begin{aligned}x^n p = i \hbar n x^{n-1} + p x^n,\end{aligned} \hspace{\stretch{1}}(2.8)

I find

\begin{aligned}\left( \frac{i t}{\hbar} \right)^2 \left[{H},{\left[{H},{x}\right]}\right] &= \frac{i \lambda t^2}{\hbar m } \left[{x^n},{p}\right] \\ &= - \frac{n t^2 \lambda}{m} x^{n-1} \\ &= - \frac{n t^2 V}{m x} \\ \end{aligned}

The triple commutator gets no prettier, and I get

\begin{aligned}\left( \frac{i t}{\hbar} \right)^3 \left[{H},{ \left[{H},{ \left[H, x\right] }\right] }\right]&= \frac{it}{\hbar} \left[{ \frac{\mathbf{p}^2}{2m} + \lambda x^n},{ - \frac{n t^2 V}{m x} }\right] \\ &= -\frac{it}{\hbar} \frac{n t^2 }{m } \frac{\lambda}{2m} \left[{\mathbf{p}^2},{ x^{n-1}}\right] \\ &= \cdots \\ &= \frac{n(n-1)t^3 V}{ 2 m^2 x^3 } (i \hbar n + 2 p x).\end{aligned}

Putting all the pieces together this gives

\begin{aligned}x_H =e^{iH t/\hbar} x e^{-iH t/\hbar} &= x + \frac{tp}{m} - \frac{n t^2 V}{ 2 m x} + \frac{n(n-1)t^3 V}{ 12 m^2 x^3 } (i \hbar n + 2 p x) + \cdots\end{aligned} \hspace{\stretch{1}}(2.9)

If there is a closed form for this it isn’t obvious to me. Would a fixed lower degree potential function shed any more light on this? How about the harmonic oscillator Hamiltonian

\begin{aligned}H = \frac{p^2}{2m} + \frac{m \omega^2 }{2} x^2\end{aligned} \hspace{\stretch{1}}(2.10)

… this one works out nicely since there’s an even-odd alternation.

Get

\begin{aligned}x_H = x \cos( \omega t ) + \frac{p}{m \omega} \sin( \omega t )\end{aligned} \hspace{\stretch{1}}(2.11)

I’d not expect such a tidy result for an arbitrary $V(x) = \lambda x^n$ potential.
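The well known oscillator closed form $x_H = x \cos\omega t + (p/m\omega) \sin\omega t$ can be verified numerically in a truncated Fock basis (a sketch of my own, with $\hbar = m = \omega = 1$); because $e^{-iHt}$ is diagonal in this basis, the relation holds entry by entry even under truncation:

```python
import numpy as np

N = 12                                    # truncated Fock (number) basis
n = np.arange(N)
a = np.diag(np.sqrt(n[1:]), k=1)          # annihilation operator
x = (a + a.T) / np.sqrt(2)                # x = (a + a^dag)/sqrt(2)
p = 1j * (a.T - a) / np.sqrt(2)           # p = i (a^dag - a)/sqrt(2)
E = n + 0.5                               # H is diagonal with energies n + 1/2

t = 0.8
U = np.diag(np.exp(-1j * E * t))          # e^{-iHt}
xH = U.conj().T @ x @ U                   # e^{iHt} x e^{-iHt}
expected = x * np.cos(t) + p * np.sin(t)
# xH and expected match to machine precision
```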

## Problem 4. Feynman-Hellmann relation.

For a continuously parametrized eigenstate, eigenvalue, and Hamiltonian ${\lvert {\psi(\lambda)} \rangle}$, $E(\lambda)$, and $H(\lambda)$ respectively, we can relate the derivatives

\begin{aligned}\frac{\partial {}}{\partial {\lambda}} ( H {\lvert {\psi} \rangle} ) &= \frac{\partial {}}{\partial {\lambda}} ( E {\lvert {\psi} \rangle} ) \\ \implies \\ \frac{\partial {H}}{\partial {\lambda}} {\lvert {\psi} \rangle} +H \frac{\partial {{\lvert {\psi} \rangle}}}{\partial {\lambda}} &= \frac{\partial {E}}{\partial {\lambda}} {\lvert {\psi} \rangle} + E \frac{\partial {{\lvert {\psi} \rangle} }}{\partial {\lambda}} \end{aligned}

Left multiplication by ${\langle {\psi} \rvert}$ gives

\begin{aligned}{\langle {\psi} \rvert}\frac{\partial {H}}{\partial {\lambda}} {\lvert {\psi} \rangle} +{\langle {\psi} \rvert}H \frac{\partial {{\lvert {\psi} \rangle}}}{\partial {\lambda}} &= {\langle {\psi} \rvert}\frac{\partial {E}}{\partial {\lambda}} {\lvert {\psi} \rangle} + E {\langle {\psi} \rvert}\frac{\partial {{\lvert {\psi} \rangle} }}{\partial {\lambda}} \\ \implies \\ {\langle {\psi} \rvert}\frac{\partial {H}}{\partial {\lambda}} {\lvert {\psi} \rangle} +({\langle {\psi} \rvert}E) \frac{\partial {{\lvert {\psi} \rangle}}}{\partial {\lambda}} &= {\langle {\psi} \rvert}\frac{\partial {E}}{\partial {\lambda}} {\lvert {\psi} \rangle} + E {\langle {\psi} \rvert}\frac{\partial {{\lvert {\psi} \rangle} }}{\partial {\lambda}} \\ \implies \\ {\langle {\psi} \rvert}\frac{\partial {H}}{\partial {\lambda}} {\lvert {\psi} \rangle} &= \frac{\partial {E}}{\partial {\lambda}} \left\langle{{\psi}} \vert {{\psi}}\right\rangle,\end{aligned}

which provides the desired identity

\begin{aligned}\frac{\partial {E}}{\partial {\lambda}} = {\langle {\psi(\lambda)} \rvert}\frac{\partial {H}}{\partial {\lambda}} {\lvert {\psi(\lambda)} \rangle}\end{aligned} \hspace{\stretch{1}}(2.12)
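As a sanity check of 2.12 (mine, not the text’s), take $H = p^2/2 + \lambda x^2/2$ with $m = \hbar = 1$, for which $E_0(\lambda) = \sqrt{\lambda}/2$ and $\partial H/\partial \lambda = x^2/2$:

```python
import math

def E0(lam):
    # ground state energy of H = p^2/2 + lam x^2/2 (m = hbar = 1)
    return 0.5 * math.sqrt(lam)

def psi_sq(x, lam):
    # squared ground state wavefunction, width set by w = sqrt(lam)
    w = math.sqrt(lam)
    return math.sqrt(w/math.pi) * math.exp(-w*x*x)

def simpson(f, a, b, n=2000):
    h = (b - a)/n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i*h) * (4 if i % 2 else 2)
    return s*h/3

lam = 1.3
dE = (E0(lam + 1e-6) - E0(lam - 1e-6)) / 2e-6                         # dE/d lambda
expectation = simpson(lambda x: 0.5*x*x*psi_sq(x, lam), -10.0, 10.0)  # <dH/d lambda>
# dE and expectation agree, as 2.12 requires
```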

## Problem 5.

### Description.

With eigenstates ${\lvert {\phi_1} \rangle}$ and ${\lvert {\phi_2} \rangle}$, of $H$ with eigenvalues $E_1$ and $E_2$, respectively, and

\begin{aligned}{\lvert {\chi_1} \rangle} &= \frac{1}{{\sqrt{2}}}( {\lvert {\phi_1} \rangle} +{\lvert {\phi_2} \rangle}) \\ {\lvert {\chi_2} \rangle} &= \frac{1}{{\sqrt{2}}}( {\lvert {\phi_1} \rangle} -{\lvert {\phi_2} \rangle})\end{aligned}

and ${\lvert {\psi(0)} \rangle} = {\lvert {\chi_1} \rangle}$, determine ${\lvert {\psi(t)} \rangle}$ in terms of ${\lvert {\phi_1} \rangle}$ and ${\lvert {\phi_2} \rangle}$.

### Solution.

\begin{aligned}{\lvert {\psi(t)} \rangle}&= e^{-i H t /\hbar} {\lvert {\psi(0)} \rangle} \\ &= e^{-i H t /\hbar} {\lvert {\chi_1} \rangle} \\ &= \frac{1}{{\sqrt{2}}} e^{-i H t /\hbar} ( {\lvert {\phi_1} \rangle} +{\lvert {\phi_2} \rangle}) \\ &= \frac{1}{{\sqrt{2}}} (e^{-i E_1 t /\hbar} {\lvert {\phi_1} \rangle} +e^{-i E_2 t /\hbar} {\lvert {\phi_2} \rangle} )\qquad\square\end{aligned}
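This closed form is easy to verify numerically. The sketch below (numpy/scipy, with arbitrary sample values for $E_1$, $E_2$, and $t$) applies the matrix exponential directly in the $\{ {\lvert {\phi_1} \rangle}, {\lvert {\phi_2} \rangle} \}$ basis:

```python
import numpy as np
from scipy.linalg import expm

# Numeric check of |ψ(t)> = e^{-iHt/ħ}|χ₁> in the {|φ₁>, |φ₂>} basis.
# The energies E₁, E₂ and the time t are arbitrary sample values.
hbar = 1.0
E1, E2, t = 0.7, 1.9, 2.5
H = np.diag([E1, E2])                      # H is diagonal in the |φ_i> basis
chi1 = np.array([1.0, 1.0]) / np.sqrt(2)   # |χ₁> = (|φ₁> + |φ₂>)/√2

psi_t = expm(-1j * H * t / hbar) @ chi1
expected = np.array([np.exp(-1j*E1*t/hbar), np.exp(-1j*E2*t/hbar)]) / np.sqrt(2)

assert np.allclose(psi_t, expected)
print("time evolution matches the closed form")
```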

## Problem 6.

### Description.

Consider a Coulomb-like potential $-\lambda/r$ with angular momentum $l=0$. If the eigenfunction is

\begin{aligned}u(r) = u_0 e^{-\beta r}\end{aligned} \hspace{\stretch{1}}(2.13)

determine $u_0$, $\beta$, and the energy eigenvalue $E$ in terms of $\lambda$, $m$, and $\hbar$.

### Solution.

We can start with the normalization constant $u_0$ by integrating

\begin{aligned}1 &= u_0^2 \int_0^\infty dr\, e^{-\beta r} e^{-\beta r} \\ &=u_0^2 \left. \frac{e^{-2 \beta r}}{-2 \beta} \right\vert_{0}^{\infty} \\ &= u_0^2 \frac{1}{{2\beta}} \\ \end{aligned}

\begin{aligned}u_0 &= \sqrt{2\beta}\end{aligned} \hspace{\stretch{1}}(2.14)

To go further, we need the Hamiltonian. Note that we can write the Laplacian with the angular momentum operator factored out using

\begin{aligned}\boldsymbol{\nabla}^2 &= \frac{1}{{\mathbf{x}^2}} \left( (\mathbf{x} \cdot \boldsymbol{\nabla})^2 + \mathbf{x} \cdot \boldsymbol{\nabla} + (\mathbf{x} \times \boldsymbol{\nabla})^2 \right)\end{aligned} \hspace{\stretch{1}}(2.15)

With zero for the angular momentum operator $\mathbf{x} \times \boldsymbol{\nabla}$, and switching to spherical coordinates, we have

\begin{aligned}\boldsymbol{\nabla}^2 &= \frac{1}{{r}} \partial_r + \frac{1}{{r}} \partial_r r \partial_r \\ &= \frac{1}{{r}} \partial_r + \frac{1}{{r}} \partial_r+ \frac{1}{{r}} r \partial_{rr} \\ &= \frac{2}{r} \partial_r + \partial_{rr} \\ \end{aligned}
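As a check on this reduction, sympy confirms that the full Laplacian of a spherically symmetric function matches $\frac{2}{r} \partial_r + \partial_{rr}$, here tested on the powers $r^k$ as a stand-in for a general radial function:

```python
import sympy as sp

# Verify ∇² f(r) = (2/r) f'(r) + f''(r) on f(r) = r^k, k arbitrary.
x, y, z = sp.symbols('x y z', positive=True)
k = sp.symbols('k', real=True)
s = sp.symbols('s', positive=True)          # stands for the radial variable
r = sp.sqrt(x**2 + y**2 + z**2)

lap = sum(sp.diff(r**k, v, 2) for v in (x, y, z))
radial = ((2/s)*sp.diff(s**k, s) + sp.diff(s**k, s, 2)).subs(s, r)

assert sp.simplify(lap - radial) == 0
print("radial Laplacian reduction verified on r**k")
```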

We can now write the Hamiltonian for the zero angular momentum case

\begin{aligned}H&= -\frac{\hbar^2}{2m} \left( \frac{2}{r} \partial_r + \partial_{rr} \right) - \frac{\lambda}{r}\end{aligned} \hspace{\stretch{1}}(2.16)

With application of this Hamiltonian to the eigenfunction we have

\begin{aligned}E u_0 e^{-\beta r} &=\left( -\frac{\hbar^2}{2m} \left( \frac{2}{r} \partial_r + \partial_{rr} \right) - \frac{\lambda}{r} \right) u_0 e^{-\beta r} \\ &=\left( -\frac{\hbar^2}{2m} \left( \frac{2}{r} (-\beta) + \beta^2 \right) - \frac{\lambda}{r} \right) u_0 e^{-\beta r} .\end{aligned}

In particular, taking $r \rightarrow \infty$, where the $1/r$ terms vanish, we have

\begin{aligned}-\frac{\hbar^2 \beta^2 }{2m} &= E\end{aligned} \hspace{\stretch{1}}(2.17)

Substituting this energy back for general $r$, and equating the remaining $1/r$ terms, we find

\begin{aligned}-\frac{\hbar^2 \beta^2 }{2m} &= \left( -\frac{\hbar^2}{2m} \left( \frac{2}{r} (-\beta) + \beta^2 \right) - \frac{\lambda}{r} \right) \\ \implies \\ \frac{\hbar^2}{2m} \frac{2}{r} \beta &= \frac{\lambda}{r} \end{aligned}

Collecting all the results we have

\begin{aligned}\beta &= \frac{\lambda m}{\hbar^2} \\ E &= -\frac{\lambda^2 m}{2 \hbar^2} \\ u_0 &= \frac{\sqrt{2 \lambda m}}{\hbar}\end{aligned} \hspace{\stretch{1}}(2.18)
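These results can be double checked symbolically. A small sympy sketch, applying the $l = 0$ Hamiltonian above to $u(r) = u_0 e^{-\beta r}$:

```python
import sympy as sp

# Verify that u(r) = u₀ e^{-βr} with β = λm/ħ² satisfies
# H u = E u for H = -(ħ²/2m)(2/r ∂_r + ∂_rr) - λ/r, E = -λ²m/(2ħ²),
# and that u₀ = sqrt(2β) normalizes u on (0, ∞).
r, lam, m, hbar = sp.symbols('r lambda m hbar', positive=True)
beta = lam * m / hbar**2
E = -lam**2 * m / (2 * hbar**2)
u0 = sp.sqrt(2 * beta)
u = u0 * sp.exp(-beta * r)

Hu = -hbar**2/(2*m) * (2/r * sp.diff(u, r) + sp.diff(u, r, 2)) - lam/r * u

assert sp.simplify(Hu - E*u) == 0
assert sp.integrate(u**2, (r, 0, sp.oo)) == 1
print("β, E, and u₀ check out")
```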

## Problem 7.

### Description.

Consider a particle of charge $e$ in a uniform electric field $\mathbf{E}_0$. Show that the expectation value of the position operator $\left\langle{\mathbf{r}}\right\rangle$ satisfies

\begin{aligned}m \frac{d^2 \left\langle{\mathbf{r}}\right\rangle }{dt^2} = e \mathbf{E}_0.\end{aligned} \hspace{\stretch{1}}(2.21)

### Solution.

This follows from Ehrenfest’s theorem once we express the force as the gradient of a potential, $e \mathbf{E}_0 = -\boldsymbol{\nabla} V$. That potential is

\begin{aligned}V = - e \mathbf{E}_0 \cdot (x,y,z).\end{aligned} \hspace{\stretch{1}}(2.22)

The Hamiltonian is therefore

\begin{aligned}H = \frac{\mathbf{p}^2}{2m} - e \mathbf{E}_0 \cdot (x,y,z).\end{aligned} \hspace{\stretch{1}}(2.23)

Ehrenfest’s theorem gives us

\begin{aligned}\frac{d}{dt} \left\langle{{x_k}}\right\rangle &= \frac{1}{{m}} \left\langle{{p_k}}\right\rangle \\ \frac{d}{dt} \left\langle{{p_k}}\right\rangle &= -\left\langle{{ \frac{\partial {V}}{\partial {x_k}} }}\right\rangle,\end{aligned}

or

\begin{aligned}\frac{d^2}{dt^2} \left\langle{{x_k}}\right\rangle &= -\frac{1}{{m}} \left\langle{{ \frac{\partial {V}}{\partial {x_k}} }}\right\rangle.\end{aligned} \hspace{\stretch{1}}(2.24)

\begin{aligned}\frac{\partial {V}}{\partial {x_k}} &= - e (\mathbf{E}_0)_k\end{aligned}

Putting these results together, with an implied sum over the direction vectors $\mathbf{e}_k$, we have

\begin{aligned}m \frac{d^2}{dt^2} \mathbf{e}_k \left\langle{{x_k}}\right\rangle = \mathbf{e}_k \left\langle{{ e (\mathbf{E}_0)_k }}\right\rangle= e \mathbf{E}_0\qquad\square\end{aligned}
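A numeric illustration: the sketch below evolves a Gaussian packet in the linear potential $V = -e E_0 x$ with a split-operator integrator (all grid and field values are arbitrary choices), and confirms that $\left\langle{x}\right\rangle$ follows the classical trajectory $\frac{1}{2} (e E_0/m) t^2$:

```python
import numpy as np

# Evolve a Gaussian packet in V = -eE₀x (1D) and check that <x>(t)
# matches ½(eE₀/m)t².  All numbers are arbitrary sample values.
hbar = m = 1.0
eE0 = 0.1
N, L = 512, 40.0
x = (np.arange(N) - N//2) * (L/N)
k = 2*np.pi*np.fft.fftfreq(N, d=L/N)

psi = np.exp(-x**2/(2*2.0**2)).astype(complex)       # width σ = 2, at rest
psi /= np.sqrt(np.sum(np.abs(psi)**2) * (L/N))

dt, steps = 0.01, 200                                # evolve to t = 2
expV = np.exp(-1j * (-eE0*x) * dt / (2*hbar))        # half-step potential
expT = np.exp(-1j * hbar * k**2 * dt / (2*m))        # full kinetic step
for _ in range(steps):
    psi = expV * np.fft.ifft(expT * np.fft.fft(expV * psi))

t = dt*steps
x_mean = np.sum(x * np.abs(psi)**2) * (L/N)
assert abs(x_mean - 0.5*eE0/m*t**2) < 1e-3
print(f"<x>(t={t}) = {x_mean:.4f}")
```

For a linear potential Ehrenfest’s theorem is exact, so the packet centroid tracks the classical parabola to within the integrator’s tolerance.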

## Problem 8.

### Description.

For Hamiltonian eigenstates ${\lvert {E_n} \rangle}$, $C = AB$, $A = \left[{B},{H}\right]$, obtain the matrix element ${\langle {E_m} \rvert} C {\lvert {E_n} \rangle}$ in terms of the matrix element of $A$.

### Solution.

I was able to get most of what was asked for here, with a small exception. I started with the matrix element for $A$, which is

\begin{aligned}{\langle {E_m} \rvert} A {\lvert {E_n} \rangle}={\langle {E_m} \rvert} BH - HB {\lvert {E_n} \rangle} =(E_n - E_m){\langle {E_m} \rvert} B {\lvert {E_n} \rangle} \end{aligned} \hspace{\stretch{1}}(2.25)

Next, computing the matrix element for $C$ we have

\begin{aligned}{\langle {E_m} \rvert} C {\lvert {E_n} \rangle}&={\langle {E_m} \rvert} BHB - HB^2 {\lvert {E_n} \rangle} \\ &=\sum_a {\langle {E_m} \rvert} BH {\lvert {E_a} \rangle}{\langle {E_a} \rvert} B {\lvert {E_n} \rangle} - E_m {\langle {E_m} \rvert} B {\lvert {E_a} \rangle} {\langle {E_a} \rvert} B {\lvert {E_n} \rangle} \\ &=\sum_a E_a {\langle {E_m} \rvert} B {\lvert {E_a} \rangle}{\langle {E_a} \rvert} B {\lvert {E_n} \rangle} -E_m {\langle {E_m} \rvert} B {\lvert {E_a} \rangle} {\langle {E_a} \rvert} B {\lvert {E_n} \rangle} \\ &=\sum_a (E_a - E_m){\langle {E_m} \rvert} B {\lvert {E_a} \rangle}{\langle {E_a} \rvert} B {\lvert {E_n} \rangle} \\ &=\sum_a {\langle {E_m} \rvert} A {\lvert {E_a} \rangle} {\langle {E_a} \rvert} B {\lvert {E_n} \rangle} \\ &={\langle {E_m} \rvert} A {\lvert {E_n} \rangle} {\langle {E_n} \rvert} B {\lvert {E_n} \rangle} +\sum_{a \ne n} {\langle {E_m} \rvert} A {\lvert {E_a} \rangle} {\langle {E_a} \rvert} B {\lvert {E_n} \rangle} \\ &={\langle {E_m} \rvert} A {\lvert {E_n} \rangle} {\langle {E_n} \rvert} B {\lvert {E_n} \rangle} +\sum_{a \ne n} {\langle {E_m} \rvert} A {\lvert {E_a} \rangle} \frac{{\langle {E_a} \rvert} A {\lvert {E_n} \rangle}}{E_n - E_a}\end{aligned}

Except for the ${\langle {E_n} \rvert} B {\lvert {E_n} \rangle}$ part of this expression, the problem as stated is complete. The relationship 2.25 is no help when $n = m$, so I see no choice but to leave that small part of the expansion in terms of $B$.
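The expansion is easy to test numerically with a random Hermitian $B$ in the energy eigenbasis (a sketch with arbitrary sample eigenvalues):

```python
import numpy as np

# Check <m|C|n> = <m|A|n><n|B|n> + Σ_{a≠n} <m|A|a><a|A|n>/(E_n - E_a)
# for C = AB, A = [B, H], with H diagonal in the energy eigenbasis.
rng = np.random.default_rng(0)
E = np.array([0.3, 1.1, 2.4, 3.9])          # distinct sample eigenvalues
H = np.diag(E)
B = rng.normal(size=(4, 4)) + 1j*rng.normal(size=(4, 4))
B = B + B.conj().T                           # make B Hermitian

A = B @ H - H @ B
C = A @ B

for mi in range(4):
    for n in range(4):
        rhs = A[mi, n] * B[n, n] + sum(
            A[mi, a] * A[a, n] / (E[n] - E[a]) for a in range(4) if a != n)
        assert np.isclose(C[mi, n], rhs)
print("matrix element identity verified")
```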

## Problem 9.

### Description.

Operator $A$ has eigenstates ${\lvert {a_i} \rangle}$, with a unitary change of basis operation $U {\lvert {a_i} \rangle} = {\lvert {b_i} \rangle}$. Determine in terms of $U$, and $A$ the operator $B$ and its eigenvalues for which ${\lvert {b_i} \rangle}$ are eigenstates.

### Solution.

Consider for motivation the matrix element of $A$ in terms of ${\lvert {b_i} \rangle}$. We will also let $A {\lvert {a_i} \rangle} = \alpha_i {\lvert {a_i} \rangle}$. We then have

\begin{aligned}{\langle {a_i} \rvert} A {\lvert {a_j} \rangle}&={\langle {b_i} \rvert} U A U^\dagger {\lvert {b_j} \rangle} \\ \end{aligned}

We also have

\begin{aligned}{\langle {a_i} \rvert} A {\lvert {a_j} \rangle}&=\alpha_j \left\langle{{a_i}} \vert {{a_j}}\right\rangle \\ &=\alpha_j \delta_{ij}\end{aligned}

So the operator $U A U^\dagger$ is diagonal in the $\{{\lvert {b_i} \rangle}\}$ basis, as required. Let’s see how it acts on that basis. We have

\begin{aligned}U A U^\dagger {\lvert {b_i} \rangle}&= U A {\lvert {a_i} \rangle} \\ &= U \alpha_i {\lvert {a_i} \rangle} \\ &= \alpha_i {\lvert {b_i} \rangle} \\ \end{aligned}

So the operator $B = U A U^\dagger$ has the states ${\lvert {b_i} \rangle}$ as eigenstates, with the same eigenvalues $\alpha_i$ as $A$.
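A quick numeric illustration of this result, with $A$ diagonal in its own eigenbasis and $U$ a random unitary:

```python
import numpy as np

# With U|a_i> = |b_i>, check that B = U A U† has the |b_i> as
# eigenstates with the same eigenvalues α_i as A (sample values).
rng = np.random.default_rng(1)
alpha = np.array([0.5, 1.5, 3.0])
A = np.diag(alpha)                          # A diagonal in its own eigenbasis
U, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j*rng.normal(size=(3, 3)))

B = U @ A @ U.conj().T
for i in range(3):
    b_i = U[:, i]                           # |b_i> = U|a_i>, with a_i = e_i
    assert np.allclose(B @ b_i, alpha[i] * b_i)
print("B = U A U† has eigenvectors |b_i> with eigenvalues α_i")
```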

## Problem 10.

### Description.

With $H {\lvert {n} \rangle} = E_n {\lvert {n} \rangle}$, $A = \left[{H},{F}\right]$ and ${\langle {0} \rvert} F {\lvert {0} \rangle} = 0$, show that

\begin{aligned}\sum_{n\ne 0} \frac{{\langle {0} \rvert} A {\lvert {n} \rangle} {\langle {n} \rvert} A {\lvert {0} \rangle} }{E_n - E_0} = {\langle {0} \rvert} AF {\lvert {0} \rangle}\end{aligned} \hspace{\stretch{1}}(2.26)

### Solution.

\begin{aligned}{\langle {0} \rvert} AF {\lvert {0} \rangle}&={\langle {0} \rvert} HF F - FH F{\lvert {0} \rangle} \\ &=\sum_n E_0 {\langle {0} \rvert} F {\lvert {n} \rangle}{\langle {n} \rvert} F {\lvert {0} \rangle} - E_n {\langle {0} \rvert} F {\lvert {n} \rangle} {\langle {n} \rvert} F{\lvert {0} \rangle} \\ &=\sum_n (E_0 -E_n) {\langle {0} \rvert} F {\lvert {n} \rangle}{\langle {n} \rvert} F {\lvert {0} \rangle} \\ &=\sum_{n\ne0} (E_0 -E_n) {\langle {0} \rvert} F {\lvert {n} \rangle}{\langle {n} \rvert} F {\lvert {0} \rangle} \\ \end{aligned}

We also have

\begin{aligned}{\langle {0} \rvert} A {\lvert {n} \rangle} {\langle {n} \rvert} A {\lvert {0} \rangle}&={\langle {0} \rvert} HF -F H {\lvert {n} \rangle} {\langle {n} \rvert} A {\lvert {0} \rangle} \\ &=(E_0 - E_n) {\langle {0} \rvert} F {\lvert {n} \rangle} {\langle {n} \rvert} HF - FH {\lvert {0} \rangle} \\ &=-(E_0 - E_n)^2 {\langle {0} \rvert} F {\lvert {n} \rangle} {\langle {n} \rvert} F {\lvert {0} \rangle} \\ \end{aligned}

Or, for $n \ne 0$,

\begin{aligned}{\langle {0} \rvert} F {\lvert {n} \rangle} {\langle {n} \rvert} F {\lvert {0} \rangle} &=-\frac{{\langle {0} \rvert} A {\lvert {n} \rangle} {\langle {n} \rvert} A {\lvert {0} \rangle}}{(E_0 - E_n)^2 }.\end{aligned}

This gives

\begin{aligned}{\langle {0} \rvert} AF {\lvert {0} \rangle}&=-\sum_{n\ne0} (E_0 -E_n) \frac{{\langle {0} \rvert} A {\lvert {n} \rangle} {\langle {n} \rvert} A {\lvert {0} \rangle}}{(E_0 - E_n)^2 } \\ &=\sum_{n\ne0} \frac{{\langle {0} \rvert} A {\lvert {n} \rangle} {\langle {n} \rvert} A {\lvert {0} \rangle}}{E_n - E_0 } \qquad\square\end{aligned}
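This sum rule also checks out numerically with random Hermitian matrices (sample values only):

```python
import numpy as np

# Check Σ_{n≠0} <0|A|n><n|A|0>/(E_n - E_0) = <0|AF|0>
# for A = [H, F] with <0|F|0> = 0, H diagonal with distinct eigenvalues.
rng = np.random.default_rng(2)
E = np.array([0.0, 0.8, 1.7, 3.2])
H = np.diag(E)
F = rng.normal(size=(4, 4)) + 1j*rng.normal(size=(4, 4))
F = F + F.conj().T                           # make F Hermitian
F[0, 0] = 0.0                                # enforce <0|F|0> = 0

A = H @ F - F @ H
lhs = sum(A[0, n] * A[n, 0] / (E[n] - E[0]) for n in range(1, 4))
rhs = (A @ F)[0, 0]

assert np.isclose(lhs, rhs)
print("sum rule verified")
```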

## Problem 11. Commutator of angular momentum with the Hamiltonian.

Show that $\left[{\mathbf{L}},{H}\right] = 0$, where $H = \mathbf{p}^2/2m + V(r)$.

This follows by considering $\left[{\mathbf{L}},{\mathbf{p}^2}\right]$, and $\left[{\mathbf{L}},{V(r)}\right]$. Let

\begin{aligned}L_{jk} = x_j p_k - x_k p_j,\end{aligned} \hspace{\stretch{1}}(2.27)

so that

\begin{aligned}\mathbf{L} = \frac{1}{{2}} \mathbf{e}_i \epsilon_{ijk} L_{jk}.\end{aligned} \hspace{\stretch{1}}(2.28)

We now need to consider the commutators of the operators $L_{jk}$ with $\mathbf{p}^2$ and $V(r)$.

Let’s start with $\mathbf{p}^2$. In particular

\begin{aligned}\mathbf{p}^2 x_m p_n&=p_k p_k x_m p_n \\ &=p_k (p_k x_m) p_n \\ &=p_k (-i\hbar \delta_{km} + x_m p_k) p_n \\ &=-i\hbar p_m p_n + (p_k x_m) p_k p_n \\ &=-i\hbar p_m p_n + (-i \hbar \delta_{km} + x_m p_k ) p_k p_n \\ &=-2 i\hbar p_m p_n + x_m p_n \mathbf{p}^2.\end{aligned}

So our commutator with $\mathbf{p}^2$ is

\begin{aligned}\left[{L_{jk}},{\mathbf{p}^2}\right]&=(x_j p_k - x_k p_j) \mathbf{p}^2 -( -2 i\hbar p_j p_k + x_j p_k \mathbf{p}^2 +2 i\hbar p_k p_j - x_k p_j \mathbf{p}^2 ).\end{aligned}

Since $p_j p_k = p_k p_j$, all terms cancel out, and the problem is reduced to showing that

\begin{aligned}\left[{\mathbf{L}},{H}\right] &= \left[{\mathbf{L}},{V(r)}\right] = 0.\end{aligned}

Now assume that $V(r)$ has a series representation

\begin{aligned}V(r) &= \sum_j a_j r^j = \sum_j a_j (x_k x_k)^{j/2}\end{aligned}

We’d like to consider the action of $x_m p_n$ on this function

\begin{aligned}x_m p_n V(r) \Psi&= -i \hbar x_m \sum_j a_j \partial_n \left( (x_k x_k)^{j/2} \Psi \right) \\ &= -i \hbar x_m \sum_j a_j \left( j x_n (x_k x_k)^{j/2-1} \Psi + r^j \partial_n \Psi \right) \\ &= -\frac{i \hbar x_m x_n}{r^2} \sum_j a_j j r^j \Psi + x_m V(r) p_n \Psi\end{aligned}

\begin{aligned}L_{mn} V(r) &=(x_m p_n - x_n p_m) V(r) \\ &= -\frac{i \hbar x_m x_n}{r^2} \sum_j a_j j r^j+\frac{i \hbar x_n x_m}{r^2} \sum_j a_j j r^j + V(r) (x_m p_n - x_n p_m )\\ &= V(r) L_{mn}\end{aligned}

Thus $\left[{L_{mn}},{V(r)}\right] = 0$ as expected, implying $\left[{\mathbf{L}},{H}\right] = 0$.
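As a spot check, sympy confirms $\left[{L_z},{H}\right] = 0$ acting on a generic wavefunction (with $\hbar = m = 1$ for brevity, and $V$ left as an unspecified function of $r$):

```python
import sympy as sp

# Symbolic check that L_z = x p_y - y p_x commutes with H = p²/2m + V(r),
# applied to a generic f(x, y, z).  Units with ħ = m = 1 are an assumption
# made purely to shorten the expressions.
x, y, z = sp.symbols('x y z', positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)
f = sp.Function('f')(x, y, z)
V = sp.Function('V')

def Lz(g):
    return -sp.I * (x*sp.diff(g, y) - y*sp.diff(g, x))

def Ham(g):
    return -sp.Rational(1, 2)*(sp.diff(g, x, 2) + sp.diff(g, y, 2)
                               + sp.diff(g, z, 2)) + V(r)*g

commutator = sp.expand(Lz(Ham(f)) - Ham(Lz(f)))
assert sp.simplify(commutator) == 0
print("[L_z, H] = 0 for a central potential")
```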
