Peeter Joot's (OLD) Blog.

Math, physics, perl, and programming obscurity.


Exponential solutions to second order linear system

Posted by peeterjoot on October 20, 2013


Motivation

We’re discussing solutions to specific forms of coupled linear differential equation systems, such as a loop of “spring” connected masses (i.e. atoms interacting with harmonic oscillator potentials) as sketched in fig. 1.1.

Fig 1.1: Three springs loop

 

Instead of assuming a solution, let’s see how far we can get attacking this problem systematically.

Matrix methods

Suppose that we have a set of N masses constrained to a circle interacting with harmonic potentials. The Lagrangian for such a system (using modulo N indexing) is

\begin{aligned}\mathcal{L} = \frac{1}{{2}} \sum_{k = 0}^{N-1} m_k \dot{u}_k^2 - \frac{1}{{2}} \sum_{k = 0}^{N-1} \kappa_k \left( u_{k+1} - u_k \right)^2.\end{aligned} \hspace{\stretch{1}}(1.1)

The force equations follow directly from the Euler-Lagrange equations

\begin{aligned}0 = \frac{d}{dt} \frac{\partial {\mathcal{L}}}{\partial {\dot{u}_{n}}}- \frac{\partial {\mathcal{L}}}{\partial {u_{n}}}.\end{aligned} \hspace{\stretch{1}}(1.2)

For the simple three particle system depicted above, this is

\begin{aligned}\mathcal{L} = \frac{1}{{2}} m_0 \dot{u}_0^2 +\frac{1}{{2}} m_1 \dot{u}_1^2 +\frac{1}{{2}} m_2 \dot{u}_2^2 - \frac{1}{{2}} \kappa_0 \left( u_1 - u_0 \right)^2- \frac{1}{{2}} \kappa_1 \left( u_2 - u_1 \right)^2- \frac{1}{{2}} \kappa_2 \left( u_0 - u_2 \right)^2,\end{aligned} \hspace{\stretch{1}}(1.3)

with equations of motion

\begin{aligned}\begin{aligned}0 &= m_0 \ddot{u}_0 + \kappa_0 \left( u_0 - u_1 \right) + \kappa_2 \left( u_0 - u_2 \right) \\ 0 &= m_1 \ddot{u}_1 + \kappa_1 \left( u_1 - u_2 \right) + \kappa_0 \left( u_1 - u_0 \right) \\ 0 &= m_2 \ddot{u}_2 + \kappa_2 \left( u_2 - u_0 \right) + \kappa_1 \left( u_2 - u_1 \right).\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.4)

Let’s partially non-dimensionalize this. First introduce an average mass \bar{{m}} and average spring constant \bar{{k}}, and rearrange slightly

\begin{aligned}\begin{aligned}\frac{\bar{{m}}}{\bar{{k}}} \ddot{u}_0 &= -\frac{\kappa_0 \bar{{m}}}{\bar{{k}} m_0} \left( u_0 - u_1 \right) - \frac{\kappa_2 \bar{{m}}}{\bar{{k}} m_0} \left( u_0 - u_2 \right) \\ \frac{\bar{{m}}}{\bar{{k}}} \ddot{u}_1 &= -\frac{\kappa_1 \bar{{m}}}{\bar{{k}} m_1} \left( u_1 - u_2 \right) - \frac{\kappa_0 \bar{{m}}}{\bar{{k}} m_1} \left( u_1 - u_0 \right) \\ \frac{\bar{{m}}}{\bar{{k}}} \ddot{u}_2 &= -\frac{\kappa_2 \bar{{m}}}{\bar{{k}} m_2} \left( u_2 - u_0 \right) - \frac{\kappa_1 \bar{{m}}}{\bar{{k}} m_2} \left( u_2 - u_1 \right).\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.5)

With

\begin{aligned}\tau = \sqrt{\frac{\bar{{k}}}{\bar{{m}}}} t = \Omega t\end{aligned} \hspace{\stretch{1}}(1.0.6.6)

\begin{aligned}\mathbf{u} = \begin{bmatrix}u_0 \\ u_1 \\ u_2\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(1.0.6.6)

\begin{aligned}B = \begin{bmatrix}-\frac{\kappa_0 \bar{{m}}}{\bar{{k}} m_0} - \frac{\kappa_2 \bar{{m}}}{\bar{{k}} m_0} &\frac{\kappa_0 \bar{{m}}}{\bar{{k}} m_0} &\frac{\kappa_2 \bar{{m}}}{\bar{{k}} m_0} \\ \frac{\kappa_0 \bar{{m}}}{\bar{{k}} m_1} &-\frac{\kappa_1 \bar{{m}}}{\bar{{k}} m_1} - \frac{\kappa_0 \bar{{m}}}{\bar{{k}} m_1} &\frac{\kappa_1 \bar{{m}}}{\bar{{k}} m_1} \\ \frac{\kappa_2 \bar{{m}}}{\bar{{k}} m_2} & \frac{\kappa_1 \bar{{m}}}{\bar{{k}} m_2} &-\frac{\kappa_2 \bar{{m}}}{\bar{{k}} m_2} - \frac{\kappa_1 \bar{{m}}}{\bar{{k}} m_2} \end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(1.0.6.6)

Our system takes the form

\begin{aligned}\frac{d^2 \mathbf{u}}{d\tau^2} = B \mathbf{u}.\end{aligned} \hspace{\stretch{1}}(1.0.6.6)

We can at least theoretically solve this in a simple fashion if we first convert it to a first order system. We can do that by augmenting our vector of displacements with their first derivatives

\begin{aligned}\mathbf{w} =\begin{bmatrix}\mathbf{u} \\ \frac{d \mathbf{u}}{d\tau} \end{bmatrix},\end{aligned} \hspace{\stretch{1}}(1.0.6.6)

so that

\begin{aligned}\frac{d \mathbf{w}}{d\tau} =\begin{bmatrix}0 & I \\ B & 0\end{bmatrix} \mathbf{w}= A \mathbf{w}.\end{aligned} \hspace{\stretch{1}}(1.0.9)

Now the solution is conceptually trivial

\begin{aligned}\mathbf{w} = e^{A \tau} \mathbf{w}_\circ.\end{aligned} \hspace{\stretch{1}}(1.0.9)

We are, however, faced with the task of exponentiating the matrix A. All the powers of A will be required, but they turn out to be easy to calculate

\begin{aligned}{\begin{bmatrix}0 & I \\ B & 0\end{bmatrix} }^2=\begin{bmatrix}0 & I \\ B & 0\end{bmatrix} \begin{bmatrix}0 & I \\ B & 0\end{bmatrix} =\begin{bmatrix}B & 0 \\ 0 & B\end{bmatrix} \end{aligned} \hspace{\stretch{1}}(1.0.11a)

\begin{aligned}{\begin{bmatrix}0 & I \\ B & 0\end{bmatrix} }^3=\begin{bmatrix}B & 0 \\ 0 & B\end{bmatrix} \begin{bmatrix}0 & I \\ B & 0\end{bmatrix} =\begin{bmatrix}0 & B \\ B^2 & 0\end{bmatrix} \end{aligned} \hspace{\stretch{1}}(1.0.11b)

\begin{aligned}{\begin{bmatrix}0 & I \\ B & 0\end{bmatrix} }^4=\begin{bmatrix}0 & B \\ B^2 & 0\end{bmatrix} \begin{bmatrix}0 & I \\ B & 0\end{bmatrix} =\begin{bmatrix}B^2 & 0 \\ 0 & B^2\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(1.0.11c)

allowing us to write out the matrix exponential

\begin{aligned}e^{A \tau} = \sum_{k = 0}^\infty \frac{\tau^{2k}}{(2k)!} \begin{bmatrix}B^k & 0 \\ 0 & B^k\end{bmatrix}+\sum_{k = 0}^\infty \frac{\tau^{2k + 1}}{(2k + 1)!} \begin{bmatrix}0 & B^k \\ B^{k+1} & 0\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(1.0.11c)
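This block structure is easy to check numerically. Here’s a quick sketch (my assumptions: NumPy and SciPy are available, and for concreteness I use the equal mass and spring constant interaction matrix worked in the example below), verifying the powers of A and summing the even and odd block series against a library matrix exponential:

import numpy as np
from scipy.linalg import expm

# Equal mass and spring constant interaction matrix (see the example below).
B = np.array([[-2.0, 1.0, 1.0],
              [1.0, -2.0, 1.0],
              [1.0, 1.0, -2.0]])
I, Z = np.eye(3), np.zeros((3, 3))
A = np.block([[Z, I], [B, Z]])

# The block structure of the powers of A.
assert np.allclose(np.linalg.matrix_power(A, 2), np.block([[B, Z], [Z, B]]))
assert np.allclose(np.linalg.matrix_power(A, 3), np.block([[Z, B], [B @ B, Z]]))

# Sum the even and odd block series explicitly and compare against expm.
tau = 1.3
S, Bk, fact = np.zeros((6, 6)), np.eye(3), 1.0                    # fact tracks (2k)!
for k in range(40):
    S += tau**(2*k) / fact * np.block([[Bk, Z], [Z, Bk]])         # A^{2k} term
    S += tau**(2*k + 1) / (fact * (2*k + 1)) * np.block([[Z, Bk], [B @ Bk, Z]])  # A^{2k+1} term
    fact *= (2*k + 1) * (2*k + 2)                                 # (2k)! -> (2k+2)!
    Bk = Bk @ B                                                   # B^k -> B^{k+1}
assert np.allclose(S, expm(A * tau))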

Case I: No zero eigenvalues

Provided that B has no zero eigenvalues, we could factor this as

\begin{aligned}\begin{bmatrix}0 & B^k \\ B^{k+1} & 0\end{bmatrix}=\begin{bmatrix}0 & B^{-1/2} \\ B^{1/2} & 0\end{bmatrix}\begin{bmatrix}B^{k + 1/2} & 0 \\ 0 & B^{k + 1/2}\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(1.0.11c)

This initially suggests the result below, but we’ll find that the three springs interaction matrix B does have a zero eigenvalue, so we’ll have to be more careful. For any interaction matrix that did not have such a zero eigenvalue we could simply write

\begin{aligned}e^{A \tau} = \sum_{k = 0}^\infty \frac{\tau^{2k}}{(2k)!} \begin{bmatrix}\sqrt{B}^{2k} & 0 \\ 0 & \sqrt{B}^{2k}\end{bmatrix}+\begin{bmatrix}0 & B^{-1/2} \\ B^{1/2} & 0\end{bmatrix}\sum_{k = 0}^\infty \frac{\tau^{2k + 1}}{(2k + 1)!} \begin{bmatrix}\sqrt{B}^{2 k + 1} & 0 \\ 0 & \sqrt{B}^{2 k+1} \end{bmatrix}=\cosh\begin{bmatrix}\sqrt{B} \tau & 0 \\ 0 & \sqrt{B} \tau\end{bmatrix}+ \begin{bmatrix}0 & B^{-1/2} \\ B^{1/2} & 0\end{bmatrix}\sinh\begin{bmatrix}\sqrt{B} \tau & 0 \\ 0 & \sqrt{B} \tau\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(1.0.11c)

This is

\begin{aligned}e^{A \tau}=\begin{bmatrix}\cosh \sqrt{B} \tau & (1/\sqrt{B}) \sinh \sqrt{B} \tau \\ \sqrt{B} \sinh \sqrt{B} \tau & \cosh \sqrt{B} \tau \end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(1.0.11c)

The solution, written out is

\begin{aligned}\begin{bmatrix}\mathbf{u} \\ \mathbf{u}'\end{bmatrix}=\begin{bmatrix}\cosh \sqrt{B} \tau & (1/\sqrt{B}) \sinh \sqrt{B} \tau \\ \sqrt{B} \sinh \sqrt{B} \tau & \cosh \sqrt{B} \tau \end{bmatrix}\begin{bmatrix}\mathbf{u}_\circ \\ \mathbf{u}_\circ'\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(1.0.11c)

so that

\begin{aligned}\boxed{\mathbf{u} = \cosh \sqrt{B} \tau \mathbf{u}_\circ + \frac{1}{{\sqrt{B}}} \sinh \sqrt{B} \tau \mathbf{u}_\circ'.}\end{aligned} \hspace{\stretch{1}}(1.0.11c)

As a check, differentiating twice shows that this is in fact the general solution, since we have

\begin{aligned}\mathbf{u}' = \sqrt{B} \sinh \sqrt{B} \tau \mathbf{u}_\circ + \cosh \sqrt{B} \tau \mathbf{u}_\circ',\end{aligned} \hspace{\stretch{1}}(1.0.11c)

and

\begin{aligned}\mathbf{u}'' = B \cosh \sqrt{B} \tau \mathbf{u}_\circ + \sqrt{B} \sinh \sqrt{B} \tau \mathbf{u}_\circ'= B \left( \cosh \sqrt{B} \tau \mathbf{u}_\circ + \frac{1}{{\sqrt{B}}} \sinh \sqrt{B} \tau \mathbf{u}_\circ' \right)= B \mathbf{u}.\end{aligned} \hspace{\stretch{1}}(1.0.11c)

Observe that this solution is a general solution to second order constant coefficient linear systems of the form we have in eq. 1.5. However, to make it meaningful we do have the additional computational task of performing an eigensystem decomposition of the matrix B. We expect negative eigenvalues that will give us oscillatory solutions (i.e. the matrix square roots will have imaginary eigenvalues).
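Here’s a quick numerical spot check of that boxed solution (a sketch assuming NumPy and SciPy; I pick a random symmetric negative definite B so that there’s no zero eigenvalue to worry about, and apply the matrix functions through an eigendecomposition):

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
M = rng.normal(size=(3, 3))
B = -(M @ M.T) - np.eye(3)        # symmetric negative definite: no zero eigenvalues

lam, V = np.linalg.eigh(B)        # B = V diag(lam) V^T
s = np.sqrt(lam.astype(complex))  # eigenvalues of sqrt(B) (imaginary here)

def f_of_B(f):
    # Apply a scalar function of sqrt(B) through the eigendecomposition.
    return (V @ np.diag(f(s)) @ V.T).real

tau = 1.7
u0 = np.array([1.0, 0.0, 0.0])
du0 = np.array([0.0, 0.5, -0.5])

# u = cosh(sqrt(B) tau) u0 + (1/sqrt(B)) sinh(sqrt(B) tau) u0'
u = f_of_B(lambda z: np.cosh(z * tau)) @ u0 + f_of_B(lambda z: np.sinh(z * tau) / z) @ du0

# Cross check against the equivalent first order system w' = A w.
Z, I = np.zeros((3, 3)), np.eye(3)
w = expm(np.block([[Z, I], [B, Z]]) * tau) @ np.concatenate([u0, du0])
assert np.allclose(u, w[:3])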

Example: A diagonalization to try things out


Let’s do that diagonalization for the simplest of the three springs system as an example, with \kappa_j = \bar{{k}} and m_j = \bar{{m}}, so that we have

\begin{aligned}B = \begin{bmatrix}-2 & 1 & 1 \\ 1 & -2 & 1 \\ 1 & 1 & -2 \end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(1.20)

An orthonormal eigensystem for B is

\begin{aligned}\left\{\mathbf{e}_{-3, 1}, \mathbf{e}_{-3, 2}, \mathbf{e}_{0, 1} \right\}=\left\{\frac{1}{{\sqrt{6}}}\begin{bmatrix} -1 \\ -1 \\ 2 \end{bmatrix},\frac{1}{{\sqrt{2}}}\begin{bmatrix} -1 \\ 1 \\ 0 \end{bmatrix},\frac{1}{{\sqrt{3}}}\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}\right\}.\end{aligned} \hspace{\stretch{1}}(1.21)

With

\begin{aligned}\begin{aligned}U &= \frac{1}{{\sqrt{6}}}\begin{bmatrix} -\sqrt{3} & -1 & \sqrt{2} \\ 0 & 2 & \sqrt{2} \\ \sqrt{3} & -1 & \sqrt{2} \end{bmatrix} \\ D &= \begin{bmatrix}-3 & 0 & 0 \\ 0 & -3 & 0 \\ 0 & 0 & 0 \\ \end{bmatrix}\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.22)

We have

\begin{aligned}B = U D U^\text{T}.\end{aligned} \hspace{\stretch{1}}(1.23)

We also find that B and its root are intimately related in a surprising way

\begin{aligned}\sqrt{B} = \sqrt{3} iU \begin{bmatrix}1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \\ \end{bmatrix}U^\text{T}=\frac{1}{{\sqrt{3} i}} B.\end{aligned} \hspace{\stretch{1}}(1.24)

We also see, unfortunately, that B has a zero eigenvalue, so we can’t compute 1/\sqrt{B}. We’ll have to back up and start again differently.
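A couple of lines of NumPy confirm both the eigenvalues and that curious square root relation (a sketch; note that eigh returns the eigenvalues in ascending order):

import numpy as np

B = np.array([[-2.0, 1.0, 1.0],
              [1.0, -2.0, 1.0],
              [1.0, 1.0, -2.0]])
lam, U = np.linalg.eigh(B)
print(np.round(lam, 12))                  # [-3. -3.  0.]

# sqrt(B) via the eigendecomposition.  The zero eigenvalue is harmless for
# sqrt(B) itself, but makes 1/sqrt(B) undefined.
sqrtB = (U * np.sqrt(lam.astype(complex))) @ U.T
assert np.allclose(sqrtB, B / (np.sqrt(3) * 1j))  # sqrt(B) = B/(sqrt(3) i)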

Case II: allowing for zero eigenvalues

Now that we realize we have to deal with zero eigenvalues, a different approach suggests itself. Instead of reducing our system to a first order system using a Hamiltonian-like transformation, let’s utilize that diagonalization directly. Our system is

\begin{aligned}\mathbf{u}'' = B \mathbf{u} = U D U^{-1} \mathbf{u},\end{aligned} \hspace{\stretch{1}}(1.25)

where D = [ \lambda_i \delta_{ij} ] and

\begin{aligned}\left( U^{-1} \mathbf{u} \right)'' = D \left( U^{-1} \mathbf{u} \right).\end{aligned} \hspace{\stretch{1}}(1.26)

Let

\begin{aligned}\mathbf{z} = U^{-1} \mathbf{u},\end{aligned} \hspace{\stretch{1}}(1.27)

so that our system is just

\begin{aligned}\mathbf{z}'' = D \mathbf{z},\end{aligned} \hspace{\stretch{1}}(1.28)

or

\begin{aligned}z_i'' = \lambda_i z_i.\end{aligned} \hspace{\stretch{1}}(1.29)

These are N equations, each decoupled and solvable by inspection. Suppose we group the eigenvalues into sets \{ \lambda_n < 0, \lambda_p > 0, \lambda_z = 0 \}. Our solution is then

\begin{aligned}\mathbf{z} = \sum_{ \lambda_n < 0, \lambda_p > 0, \lambda_z = 0}\left( a_n \cos \sqrt{-\lambda_n} \tau + b_n \sin \sqrt{-\lambda_n} \tau \right)\mathbf{e}_n+\left( a_p \cosh \sqrt{\lambda_p} \tau + b_p \sinh \sqrt{\lambda_p} \tau \right)\mathbf{e}_p+\left( a_z + b_z \tau \right) \mathbf{e}_z.\end{aligned} \hspace{\stretch{1}}(1.30)

Transforming back to lattice coordinates using \mathbf{u} = U \mathbf{z}, we have

\begin{aligned}\mathbf{u} = \sum_{ \lambda_n < 0, \lambda_p > 0, \lambda_z = 0}\left( a_n \cos \sqrt{-\lambda_n} \tau + b_n \sin \sqrt{-\lambda_n} \tau \right)U \mathbf{e}_n+\left( a_p \cosh \sqrt{\lambda_p} \tau + b_p \sinh \sqrt{\lambda_p} \tau \right)U \mathbf{e}_p+\left( a_z + b_z \tau \right) U \mathbf{e}_z.\end{aligned} \hspace{\stretch{1}}(1.31)

Observe that the zero eigenvalue terms \left( a_z + b_z \tau \right) U \mathbf{e}_z describe a rigid translation (and uniform drift) of the whole lattice, since here U \mathbf{e}_z is the uniform mode eigenvector of B. These terms carry no internal oscillation, so we drop them in what follows.

If U = [ \mathbf{e}_i ] are a set of not necessarily orthonormal eigenvectors for B, then the vectors \mathbf{f}_i, where \mathbf{e}_i \cdot \mathbf{f}_j = \delta_{ij} are the reciprocal frame vectors. These can be extracted from U^{-1} = [ \mathbf{f}_i ]^\text{T} (i.e., the rows of U^{-1}). Taking dot products between \mathbf{f}_i with \mathbf{u}(0) = \mathbf{u}_\circ and \mathbf{u}'(0) = \mathbf{u}'_\circ, provides us with the unknown coefficients a_n, b_n

\begin{aligned}\mathbf{u}(\tau)= \sum_{ \lambda_n < 0, \lambda_p > 0 }\left( (\mathbf{u}_\circ \cdot \mathbf{f}_n) \cos \sqrt{-\lambda_n} \tau + \frac{\mathbf{u}_\circ' \cdot \mathbf{f}_n}{\sqrt{-\lambda_n} } \sin \sqrt{-\lambda_n} \tau \right)\mathbf{e}_n+\left( (\mathbf{u}_\circ \cdot \mathbf{f}_p) \cosh \sqrt{\lambda_p} \tau + \frac{\mathbf{u}_\circ' \cdot \mathbf{f}_p}{\sqrt{\lambda_p} } \sinh \sqrt{\lambda_p} \tau \right)\mathbf{e}_p.\end{aligned} \hspace{\stretch{1}}(1.32)

Supposing that we constrain ourselves to looking at just the oscillatory solutions (i.e. the lattice does not shake itself to pieces), then we have

\begin{aligned}\boxed{\mathbf{u}(\tau)= \sum_{ \lambda_n < 0 }\left( \sum_j \mathbf{e}_{n,j} \mathbf{f}_{n,j}^\text{T} \right)\left( \mathbf{u}_\circ \cos \sqrt{-\lambda_n} \tau + \frac{\mathbf{u}_\circ' }{\sqrt{-\lambda_n} } \sin \sqrt{-\lambda_n} \tau \right).}\end{aligned} \hspace{\stretch{1}}(1.33)

Eigenvectors for degenerate eigenvalues have been explicitly enumerated here, something previously implied. Observe that the dot products of the form (\mathbf{a} \cdot \mathbf{f}_i) \mathbf{e}_i have been put into projection operator form to group terms more nicely. The solution can be thought of as a weighted projection operator acting as a time evolution operator on the initial state.

Example: Our example interaction revisited


Recall that we had an orthonormal basis for the \lambda = -3 eigenspace of the interaction example of eq. 1.20, so \mathbf{e}_{-3,i} = \mathbf{f}_{-3, i}. We can sum \mathbf{e}_{-3,1} \mathbf{e}_{-3, 1}^\text{T} + \mathbf{e}_{-3,2} \mathbf{e}_{-3, 2}^\text{T} to find

\begin{aligned}\mathbf{u}(\tau)= \frac{1}{{3}}\begin{bmatrix}2 & -1 & -1 \\ -1 & 2 & -1 \\ -1 & -1 & 2\end{bmatrix}\left( \mathbf{u}_\circ \cos \sqrt{3} \tau + \frac{ \mathbf{u}_\circ' }{\sqrt{3} } \sin \sqrt{3} \tau \right).\end{aligned} \hspace{\stretch{1}}(1.34)

The leading matrix is an orthogonal projector onto the eigenspace for \lambda_n = -3, acting on the initial conditions. Observe that it is just B scaled by the inverse of the square of the non-zero eigenvalue of \sqrt{B} (that is, -B/3). From this we can confirm by inspection that this is a solution to \mathbf{u}'' = B \mathbf{u}, as desired.
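Numerically (a NumPy sketch) we can verify both the projector identity and that eq. 1.34 satisfies the equations of motion, using a finite difference second derivative:

import numpy as np

B = np.array([[-2.0, 1.0, 1.0],
              [1.0, -2.0, 1.0],
              [1.0, 1.0, -2.0]])
P = np.array([[2.0, -1.0, -1.0],
              [-1.0, 2.0, -1.0],
              [-1.0, -1.0, 2.0]]) / 3.0
assert np.allclose(P, -B / 3.0)           # the projector is -B/3

rng = np.random.default_rng(1)
u0, du0 = rng.normal(size=3), rng.normal(size=3)
w = np.sqrt(3.0)

def u(tau):
    # eq. 1.34; matching arbitrary initial conditions exactly would also
    # require the dropped zero eigenvalue (translation) terms.
    return P @ (u0 * np.cos(w * tau) + du0 * np.sin(w * tau) / w)

tau, h = 0.8, 1e-4
upp = (u(tau + h) - 2 * u(tau) + u(tau - h)) / h**2
assert np.allclose(upp, B @ u(tau), atol=1e-6)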

Fourier transform methods

Let’s now try another item from our usual toolbox on these sorts of second order systems, the Fourier transform. For a function of a single time variable, let’s write the transform pair as

\begin{aligned}x(t) = \int_{-\infty}^{\infty} \tilde{x}(\omega) e^{-i \omega t} d\omega\end{aligned} \hspace{\stretch{1}}(1.0.35.35)

\begin{aligned}\tilde{x}(\omega) = \frac{1}{{2\pi}} \int_{-\infty}^{\infty} x(t) e^{i \omega t} dt\end{aligned} \hspace{\stretch{1}}(1.0.35.35)

One mass harmonic oscillator

The simplest second order system is that of the harmonic oscillator

\begin{aligned}0 = \ddot{x}(t) + \omega_\circ^2 x.\end{aligned} \hspace{\stretch{1}}(1.0.35.35)

Application of the transform gives

\begin{aligned}0 = \left( \frac{d^2}{dt^2} + \omega_\circ^2 \right)\int_{-\infty}^{\infty} \tilde{x}(\omega) e^{-i \omega t} d\omega= \int_{-\infty}^{\infty} \left( -\omega^2 + \omega_\circ^2 \right)\tilde{x}(\omega) e^{-i \omega t} d\omega.\end{aligned} \hspace{\stretch{1}}(1.0.35.35)

We clearly have a constraint that is a function of frequency, but one that has to hold for all time. Let’s transform this constraint to the frequency domain to consider that constraint independent of time.

\begin{aligned}0 =\frac{1}{{2 \pi}}\int_{-\infty}^{\infty} dt e^{ i \omega t}\int_{-\infty}^{\infty} \left( -{\omega'}^2 + \omega_\circ^2 \right)\tilde{x}(\omega') e^{-i \omega' t} d\omega'=\int_{-\infty}^{\infty} d\omega'\left( -{\omega'}^2 + \omega_\circ^2 \right)\tilde{x}(\omega') \frac{1}{{2 \pi}}\int_{-\infty}^{\infty} dt e^{ i (\omega -\omega') t}=\int_{-\infty}^{\infty} d\omega'\left( -{\omega'}^2 + \omega_\circ^2 \right)\tilde{x}(\omega') \delta( \omega - \omega' )=\left( -{\omega}^2 + \omega_\circ^2 \right)\tilde{x}(\omega).\end{aligned} \hspace{\stretch{1}}(1.0.35.35)

How do we make sense of this?

Since \omega is an integration variable, we can’t just mandate that it equals the constant natural frequency \pm \omega_\circ. It’s clear that we require a constraint on the transform \tilde{x}(\omega) as well. As a trial solution, imagine that

\begin{aligned}\tilde{x}(\omega) = \left\{\begin{array}{l l}\tilde{x}_\circ & \quad \mbox{if } \left\lvert {\omega \mp \omega_\circ} \right\rvert < \omega_{\text{cutoff}} \\ 0 & \quad \mbox{otherwise.}\end{array}\right.\end{aligned} \hspace{\stretch{1}}(1.0.35.35)

This gives us

\begin{aligned}0 = \tilde{x}_\circ\int_{\pm \omega_\circ -\omega_{\text{cutoff}}}^{\pm \omega_\circ +\omega_{\text{cutoff}}}\left( \omega^2 - \omega_\circ^2 \right)e^{-i \omega t} d\omega.\end{aligned} \hspace{\stretch{1}}(1.0.35.35)

Now it is clear that we can satisfy our constraint only if the interval [\pm \omega_\circ -\omega_{\text{cutoff}}, \pm \omega_\circ + \omega_{\text{cutoff}}] is made infinitesimal. Specifically, we require both a \omega^2 = \omega_\circ^2 constraint and that the transform \tilde{x}(\omega) have a delta function nature. That is

\begin{aligned}\tilde{x}(\omega) = A \delta(\omega - \omega_\circ)+ B \delta(\omega + \omega_\circ).\end{aligned} \hspace{\stretch{1}}(1.0.35.35)

Substitution back into our transform gives

\begin{aligned}x(t) = A e^{-i \omega_\circ t}+ B e^{i \omega_\circ t}.\end{aligned} \hspace{\stretch{1}}(1.0.35.35)

We can verify quickly that this satisfies our harmonic equation \ddot{x} = -\omega_\circ^2 x.

Two mass harmonic oscillator

Having applied the transform technique to the very simplest second order system, we can now consider the next more complex system, that of two harmonically interacting masses (i.e. two masses connected by a spring, sliding on a frictionless surface).


Our system is described by

\begin{aligned}\mathcal{L} = \frac{1}{{2}} m_1 \dot{u}_1^2+ \frac{1}{{2}} m_2 \dot{u}_2^2-\frac{1}{{2}} \kappa \left( u_2 - u_1 \right)^2,\end{aligned} \hspace{\stretch{1}}(1.0.35.35)

and the pair of Euler-Lagrange equations

\begin{aligned}0 = \frac{d}{dt} \frac{\partial {\mathcal{L}}}{\partial {\dot{u}_i}}- \frac{\partial {\mathcal{L}}}{\partial {u_i}}.\end{aligned} \hspace{\stretch{1}}(1.0.35.35)

The equations of motion are

\begin{aligned}\begin{aligned}0 &= m_1 \ddot{u}_1 + \kappa \left( u_1 - u_2 \right) \\ 0 &= m_2 \ddot{u}_2 + \kappa \left( u_2 - u_1 \right) \end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.35.35)

Let

\begin{aligned}\begin{aligned}u_1(t) &= \int_{-\infty}^\infty \tilde{u}_1(\omega) e^{-i \omega t} d\omega \\ u_2(t) &= \int_{-\infty}^\infty \tilde{u}_2(\omega) e^{-i \omega t} d\omega.\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.35.35)

Insertion of these transform pairs into our equations of motion produces a pair of simultaneous integral equations to solve

\begin{aligned}\begin{aligned}0 &= \int_{-\infty}^\infty \left( \left( -m_1 \omega^2 + \kappa \right) \tilde{u}_1(\omega) - \kappa \tilde{u}_2(\omega) \right)e^{-i \omega t} d\omega \\ 0 &= \int_{-\infty}^\infty \left( \left( -m_2 \omega^2 + \kappa \right) \tilde{u}_2(\omega) - \kappa \tilde{u}_1(\omega) \right)e^{-i \omega t} d\omega.\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.35.35)

As with the single spring case, we can decouple these equations with an inverse transformation operation \int e^{i \omega' t}/2\pi, which gives us (after dropping primes)

\begin{aligned}0 = \begin{bmatrix}\left( -m_1 \omega^2 + \kappa \right) &- \kappa \\ - \kappa &\left( -m_2 \omega^2 + \kappa \right) \end{bmatrix}\begin{bmatrix}\tilde{u}_1(\omega) \\ \tilde{u}_2(\omega)\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(1.0.35.35)

Taking determinants gives us the constraint on the frequency

\begin{aligned}0 = \left( -m_1 \omega^2 + \kappa \right) \left( -m_2 \omega^2 + \kappa \right) - \kappa^2=m_1 m_2 \omega^4- \kappa (m_1 + m_2) \omega^2 =\omega^2 \left( m_1 m_2 \omega^2 -\kappa (m_1 + m_2) \right).\end{aligned} \hspace{\stretch{1}}(1.0.35.35)

Introducing a reduced mass

\begin{aligned}\frac{1}{{\mu}} = \frac{1}{m_1}+\frac{1}{m_2},\end{aligned} \hspace{\stretch{1}}(1.0.35.35)

the pair of solutions are

\begin{aligned}\begin{aligned}\omega^2 &= 0 \\ \omega^2 &=\frac{\kappa}{\mu}\equiv \omega_\circ^2.\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.35.35)

As with the single mass oscillator, we require the transforms \tilde{u}_j(\omega) to also have a delta function nature. The frequency constraint and that delta function requirement together can be expressed, for j \in \{1, 2\}, as

\begin{aligned}\tilde{u}_j(\omega) = A_{j+} \delta( \omega - \omega_\circ )+ A_{j0} \delta( \omega )+ A_{j-} \delta( \omega + \omega_\circ ).\end{aligned} \hspace{\stretch{1}}(1.0.35.35)

With a transformation back to the time domain, writing A_{1\pm} = A_{\pm}, A_{10} = A_0, A_{2\pm} = B_{\pm}, and A_{20} = B_0, we have functions of the form

\begin{aligned}\begin{aligned}u_1(t) &= A_{+} e^{ -i \omega_\circ t }+ A_{0} + A_{-} e^{ i \omega_\circ t } \\ u_2(t) &= B_{+} e^{ -i \omega_\circ t }+ B_{0} + B_{-} e^{ i \omega_\circ t }.\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.35.35)

Back insertion of these into the equations of motion gives

\begin{aligned}\begin{aligned}0 &=-m_1 \omega_\circ^2\left( A_{+} e^{ -i \omega_\circ t } + A_{-} e^{ i \omega_\circ t } \right)+ \kappa\left( \left( A_{+} - B_{+} \right) e^{ -i \omega_\circ t } + \left( A_{-} - B_{-} \right) e^{ i \omega_\circ t } + A_{0} - B_{0} \right) \\ 0 &=-m_2 \omega_\circ^2\left( B_{+} e^{ -i \omega_\circ t } + B_{-} e^{ i \omega_\circ t } \right)+ \kappa\left( \left( B_{+} - A_{+} \right) e^{ -i \omega_\circ t } + \left( B_{-} - A_{-} \right) e^{ i \omega_\circ t } + B_{0} - A_{0} \right)\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.35.35)

Equality requires identity for all powers of e^{i \omega_\circ t}, or

\begin{aligned}\begin{aligned}0 &= B_{0} - A_{0} \\ 0 &= -m_1 \omega_\circ^2 A_{+} + \kappa \left( A_{+} - B_{+} \right) \\ 0 &= -m_1 \omega_\circ^2 A_{-} + \kappa \left( A_{-} - B_{-} \right) \\ 0 &= -m_2 \omega_\circ^2 B_{+} + \kappa \left( B_{+} - A_{+} \right) \\ 0 &= -m_2 \omega_\circ^2 B_{-} + \kappa \left( B_{-} - A_{-} \right),\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.35.35)

or B_{0} = A_{0} and

\begin{aligned}0 =\begin{bmatrix}\kappa -m_1 \omega_\circ^2 & 0 & - \kappa & 0 \\ 0 & \kappa -m_1 \omega_\circ^2 & 0 & - \kappa \\ -\kappa & 0 & \kappa -m_2 \omega_\circ^2 & 0 \\ 0 & -\kappa & 0 & \kappa -m_2 \omega_\circ^2 \end{bmatrix}\begin{bmatrix}A_{+} \\ A_{-} \\ B_{+} \\ B_{-} \end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(1.0.35.35)

Observe that

\begin{aligned}\kappa - m_1 \omega_\circ^2=\kappa - m_1 \kappa \left( \frac{1}{{m_1}} + \frac{1}{{m_2}} \right)=\kappa \left( 1 - 1 - \frac{m_1}{m_2} \right)= -\kappa \frac{m_1}{m_2},\end{aligned} \hspace{\stretch{1}}(1.0.35.35)

(with the similar alternate result \kappa - m_2 \omega_\circ^2 = -\kappa \, m_2/m_1). We can rewrite the matrix equation above as

\begin{aligned}0 =-\kappa\begin{bmatrix}m_1/m_2 & 0 & 1 & 0 \\ 0 & m_1/m_2 & 0 & 1 \\ 1 & 0 & m_2/m_1 & 0 \\ 0 & 1 & 0 & m_2/m_1 \end{bmatrix}\begin{bmatrix}A_{+} \\ A_{-} \\ B_{+} \\ B_{-} \end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(1.0.35.35)

It’s clear that there are two pairs of linear dependencies here, so the determinant is zero as expected. We can read off the remaining relations. Our undetermined coefficients are related by

\begin{aligned}\begin{aligned}B_{0} &= A_{0} \\ m_1 A_{+} &= -m_2 B_{+} \\ m_1 A_{-} &= -m_2 B_{-} \end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.35.35)

Writing a = A_0 = B_0, A_+ = m_2 b, and A_- = m_2 c, the general solution is

\begin{aligned}\boxed{\begin{aligned}u_1(t) &= a+ m_2 b e^{ -i \omega_\circ t }+ m_2 c e^{ i \omega_\circ t } \\ u_2(t) &= a- m_1 b e^{ -i \omega_\circ t }- m_1 c e^{ i \omega_\circ t }.\end{aligned}}\end{aligned} \hspace{\stretch{1}}(1.0.35.35)

Observe that the constant term is not really of interest, since it represents a constant displacement of both atoms (just a change of coordinates).

Check:

\begin{aligned}u_1(t) - u_2(t) = + (m_1 + m_2)b e^{ -i \omega_\circ t }+ (m_1 + m_2)c e^{ i \omega_\circ t },\end{aligned} \hspace{\stretch{1}}(1.0.35.35)

\begin{aligned}m_1 \ddot{u}_1(t) + \kappa (u_1(t) - u_2(t) )=-m_1 m_2 \omega_\circ^2 \left( b e^{ -i \omega_\circ t } + c e^{ i \omega_\circ t } \right)+(m_1 + m_2) \kappa \left( b e^{ -i \omega_\circ t } + c e^{ i \omega_\circ t } \right)=\left( -m_1 m_2 \omega_\circ^2 + (m_1 + m_2) \kappa \right)\left( b e^{ -i \omega_\circ t } + c e^{ i \omega_\circ t } \right)=\left( -m_1 m_2 \kappa \frac{m_1 + m_2}{m_1 m_2} + (m_1 + m_2) \kappa \right)\left( b e^{ -i \omega_\circ t } + c e^{ i \omega_\circ t } \right)= 0.\end{aligned} \hspace{\stretch{1}}(1.0.35.35)
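The same check can be run numerically in a few lines (a sketch, with arbitrarily chosen masses, spring constant, and integration constants, using a finite difference second derivative):

import numpy as np

m1, m2, kappa = 1.0, 3.0, 2.0
w0 = np.sqrt(kappa * (1.0/m1 + 1.0/m2))   # omega_0^2 = kappa/mu

a, b, c = 0.3, 0.2, 0.1                   # arbitrary constants

def u1(t): return a + m2 * b * np.exp(-1j*w0*t) + m2 * c * np.exp(1j*w0*t)
def u2(t): return a - m1 * b * np.exp(-1j*w0*t) - m1 * c * np.exp(1j*w0*t)

t, h = 0.7, 1e-4
ddu1 = (u1(t+h) - 2*u1(t) + u1(t-h)) / h**2
ddu2 = (u2(t+h) - 2*u2(t) + u2(t-h)) / h**2
assert abs(m1*ddu1 + kappa*(u1(t) - u2(t))) < 1e-6
assert abs(m2*ddu2 + kappa*(u2(t) - u1(t))) < 1e-6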

Reflection

We’ve seen that we can solve any of these constant coefficient systems exactly using matrix methods; however, these will not be practical for large systems unless we have methods to solve for all the non-zero eigenvalues and their corresponding eigenvectors. With the Fourier transform methods we find that our solutions in the frequency domain are of the form

\begin{aligned}\tilde{u}_j(\omega) = \sum_k a_{kj} \delta( \omega - \omega_{kj} ),\end{aligned} \hspace{\stretch{1}}(1.63)

or in the time domain

\begin{aligned}u_j(t) = \sum_k a_{kj} e^{ - i \omega_{kj} t }.\end{aligned} \hspace{\stretch{1}}(1.64)

We assumed exactly this form of solution in class. The trial solution that we used in class factored out a phase shift from a_{kj} of the form e^{ i q x_n }, but that doesn’t change the underlying form of the assumed solution. We have, however, found a good justification for the trial solution we utilized.


An updated compilation of notes, for ‘PHY452H1S Basic Statistical Mechanics’, Taught by Prof. Arun Paramekanti

Posted by peeterjoot on March 27, 2013

Here’s my second update of my notes compilation for this course, including all of the following:

March 27, 2013 Fermi gas

March 26, 2013 Fermi gas thermodynamics

March 23, 2013 Relativistic generalization of statistical mechanics

March 21, 2013 Kittel Zipper problem

March 18, 2013 Pathria chapter 4 diatomic molecule problem

March 17, 2013 Gibbs sum for a two level system

March 16, 2013 open system variance of N

March 16, 2013 probability forms of entropy

March 14, 2013 Grand Canonical/Fermion-Bosons

March 13, 2013 Quantum anharmonic oscillator

March 12, 2013 Grand canonical ensemble

March 11, 2013 Heat capacity of perturbed harmonic oscillator

March 10, 2013 Langevin small approximation

March 10, 2013 Addition of two one half spins

March 10, 2013 Midterm II reflection

March 07, 2013 Thermodynamic identities

March 06, 2013 Temperature

March 05, 2013 Interacting spin

plus everything detailed in the description of my first update and before.


Addition of two one half spins

Posted by peeterjoot on March 10, 2013


In class an example of interacting spins was given where the Hamiltonian included a dot product of the two spin operators

\begin{aligned}H = \mathbf{S}_1 \cdot \mathbf{S}_2.\end{aligned} \hspace{\stretch{1}}(1.0.1)

The energy eigenvalues for this Hamiltonian were derived by using the trick to rewrite this in terms of just squared spin operators

\begin{aligned}H = \frac{(\mathbf{S}_1 + \mathbf{S}_2)^2 - \mathbf{S}_1^2 - \mathbf{S}_2^2}{2}.\end{aligned} \hspace{\stretch{1}}(1.0.2)

For each of these terms we can calculate the total energy eigenvalues from

\begin{aligned}\mathbf{S}^2 \Psi = \hbar^2 S (S + 1) \Psi,\end{aligned} \hspace{\stretch{1}}(1.0.3)

where S takes on the values of the total spin for the (possibly composite) spin operator. Thinking about the spin operators in their matrix representation, it’s not obvious to me that we can just add the total spins, so that if \mathbf{S}_1 and \mathbf{S}_2 are the spin operators for two respective particles, then the total system has a spin operator \mathbf{S} = \mathbf{S}_1 + \mathbf{S}_2 (really \mathbf{S} = \mathbf{S}_1 \otimes I_2 + I_2 \otimes \mathbf{S}_2, since the respective spin operators act only on their respective particles).

Let’s develop a bit of intuition on this, by calculating the energy eigenvalues of \mathbf{S}_1 \cdot \mathbf{S}_2 using Pauli matrices.

First let’s look at how each of the Pauli matrices operates on the S_z eigenvectors

\begin{aligned}\sigma_x {\left\lvert {+} \right\rangle} = \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix} \begin{bmatrix}1 \\ 0\end{bmatrix}=\begin{bmatrix}0 \\ 1 \end{bmatrix}= {\left\lvert {-} \right\rangle}\end{aligned} \hspace{\stretch{1}}(1.0.4a)

\begin{aligned}\sigma_x {\left\lvert {-} \right\rangle} = \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix} \begin{bmatrix}0 \\ 1\end{bmatrix}=\begin{bmatrix}1 \\ 0 \end{bmatrix}= {\left\lvert {+} \right\rangle}\end{aligned} \hspace{\stretch{1}}(1.0.4b)

\begin{aligned}\sigma_y {\left\lvert {+} \right\rangle} = \begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix} \begin{bmatrix}1 \\ 0\end{bmatrix}=\begin{bmatrix}0 \\ i \end{bmatrix}= i {\left\lvert {-} \right\rangle}\end{aligned} \hspace{\stretch{1}}(1.0.4c)

\begin{aligned}\sigma_y {\left\lvert {-} \right\rangle} = \begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix} \begin{bmatrix}0 \\ 1\end{bmatrix}=\begin{bmatrix}-i \\ 0 \end{bmatrix}= -i {\left\lvert {+} \right\rangle}\end{aligned} \hspace{\stretch{1}}(1.0.4d)

\begin{aligned}\sigma_z {\left\lvert {+} \right\rangle} = \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix} \begin{bmatrix}1 \\ 0\end{bmatrix}=\begin{bmatrix}1 \\ 0 \end{bmatrix}= {\left\lvert {+} \right\rangle}\end{aligned} \hspace{\stretch{1}}(1.0.4e)

\begin{aligned}\sigma_z {\left\lvert {-} \right\rangle} = \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix} \begin{bmatrix}0 \\ 1\end{bmatrix}=-\begin{bmatrix}0 \\ 1 \end{bmatrix}= -{\left\lvert {-} \right\rangle}\end{aligned} \hspace{\stretch{1}}(1.0.4f)

Summarizing, these are

\begin{aligned}\sigma_x {\left\lvert {\pm} \right\rangle} = {\left\lvert {\mp} \right\rangle}\end{aligned} \hspace{\stretch{1}}(1.0.5a)

\begin{aligned}\sigma_y {\left\lvert {\pm} \right\rangle} = \pm i {\left\lvert {\mp} \right\rangle}\end{aligned} \hspace{\stretch{1}}(1.0.5b)

\begin{aligned}\sigma_z {\left\lvert {\pm} \right\rangle} = \pm {\left\lvert {\pm} \right\rangle}\end{aligned} \hspace{\stretch{1}}(1.0.5c)

For convenience let’s avoid any sort of direct product notation, with the composite operations defined implicitly by

\begin{aligned}\left( S_{1k} \otimes S_{2k} \right)\left( {\left\lvert {\alpha} \right\rangle} \otimes {\left\lvert {\beta} \right\rangle}  \right)=S_{1k} S_{2k} {\left\lvert {\alpha \beta} \right\rangle}=\left( S_{1k} {\left\lvert {\alpha} \right\rangle}  \right) \otimes\left( S_{2k} {\left\lvert {\beta} \right\rangle}  \right).\end{aligned} \hspace{\stretch{1}}(1.0.6)

Now let’s compute all the various operations

\begin{aligned}\begin{aligned}\sigma_{1x} \sigma_{2x} {\left\lvert {++} \right\rangle} &= {\left\lvert {--} \right\rangle} \\ \sigma_{1x} \sigma_{2x} {\left\lvert {--} \right\rangle} &= {\left\lvert {++} \right\rangle} \\ \sigma_{1x} \sigma_{2x} {\left\lvert {+-} \right\rangle} &= {\left\lvert {-+} \right\rangle} \\ \sigma_{1x} \sigma_{2x} {\left\lvert {-+} \right\rangle} &= {\left\lvert {+-} \right\rangle}\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.7a)

\begin{aligned}\begin{aligned}\sigma_{1y} \sigma_{2y} {\left\lvert {++} \right\rangle} &= i^2 {\left\lvert {--} \right\rangle} \\ \sigma_{1y} \sigma_{2y} {\left\lvert {--} \right\rangle} &= (-i)^2 {\left\lvert {++} \right\rangle} \\ \sigma_{1y} \sigma_{2y} {\left\lvert {+-} \right\rangle} &= i (-i) {\left\lvert {-+} \right\rangle} \\ \sigma_{1y} \sigma_{2y} {\left\lvert {-+} \right\rangle} &= (-i) i {\left\lvert {+-} \right\rangle}\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.7b)

\begin{aligned}\begin{aligned}\sigma_{1z} \sigma_{2z} {\left\lvert {++} \right\rangle} &= {\left\lvert {++} \right\rangle} \\ \sigma_{1z} \sigma_{2z} {\left\lvert {--} \right\rangle} &= (-1)^2 {\left\lvert {--} \right\rangle} \\ \sigma_{1z} \sigma_{2z} {\left\lvert {+-} \right\rangle} &= -{\left\lvert {+-} \right\rangle} \\ \sigma_{1z} \sigma_{2z} {\left\lvert {-+} \right\rangle} &= -{\left\lvert {-+} \right\rangle}\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.7c)

Tabulating first the action of the sum of the x and y operators we have

\begin{aligned}\begin{aligned}\left( \sigma_{1x} \sigma_{2x} + \sigma_{1y} \sigma_{2y}  \right) {\left\lvert {++} \right\rangle} &= 0 \\ \left( \sigma_{1x} \sigma_{2x} + \sigma_{1y} \sigma_{2y}  \right) {\left\lvert {--} \right\rangle} &= 0 \\ \left( \sigma_{1x} \sigma_{2x} + \sigma_{1y} \sigma_{2y}  \right) {\left\lvert {+-} \right\rangle} &= 2 {\left\lvert {-+} \right\rangle} \\ \left( \sigma_{1x} \sigma_{2x} + \sigma_{1y} \sigma_{2y}  \right) {\left\lvert {-+} \right\rangle} &= 2 {\left\lvert {+-} \right\rangle}\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.8)

so that, using \mathbf{S}_1 \cdot \mathbf{S}_2 = (\hbar^2/4) \boldsymbol{\sigma}_1 \cdot \boldsymbol{\sigma}_2,

\begin{aligned}\begin{aligned}\boldsymbol{\sigma}_1 \cdot \boldsymbol{\sigma}_2 {\left\lvert {++} \right\rangle} &= {\left\lvert {++} \right\rangle} \\ \boldsymbol{\sigma}_1 \cdot \boldsymbol{\sigma}_2 {\left\lvert {--} \right\rangle} &= {\left\lvert {--} \right\rangle} \\ \boldsymbol{\sigma}_1 \cdot \boldsymbol{\sigma}_2 {\left\lvert {+-} \right\rangle} &= 2 {\left\lvert {-+} \right\rangle} - {\left\lvert {+-} \right\rangle} \\ \boldsymbol{\sigma}_1 \cdot \boldsymbol{\sigma}_2 {\left\lvert {-+} \right\rangle} &= 2 {\left\lvert {+-} \right\rangle} - {\left\lvert {-+} \right\rangle}\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.9)

Now we are set to write out the Hamiltonian matrix. Doing this with respect to the basis \beta = \{ {\left\lvert {++} \right\rangle}, {\left\lvert {--} \right\rangle}, {\left\lvert {+-} \right\rangle}, {\left\lvert {-+} \right\rangle} \}, we have

\begin{aligned}H &= \mathbf{S}_1 \cdot \mathbf{S}_2 \\ &= \begin{bmatrix}\left\langle ++ \right\rvert H \left\lvert ++ \right\rangle & \left\langle ++ \right\rvert H \left\lvert -- \right\rangle & \left\langle ++ \right\rvert H \left\lvert +- \right\rangle & \left\langle ++ \right\rvert H \left\lvert -+ \right\rangle \\ \left\langle -- \right\rvert H \left\lvert ++ \right\rangle & \left\langle -- \right\rvert H \left\lvert -- \right\rangle & \left\langle -- \right\rvert H \left\lvert +- \right\rangle & \left\langle -- \right\rvert H \left\lvert -+ \right\rangle \\ \left\langle +- \right\rvert H \left\lvert ++ \right\rangle & \left\langle +- \right\rvert H \left\lvert -- \right\rangle & \left\langle +- \right\rvert H \left\lvert +- \right\rangle & \left\langle +- \right\rvert H \left\lvert -+ \right\rangle \\ \left\langle -+ \right\rvert H \left\lvert ++ \right\rangle & \left\langle -+ \right\rvert H \left\lvert -- \right\rangle & \left\langle -+ \right\rvert H \left\lvert +- \right\rangle & \left\langle -+ \right\rvert H \left\lvert -+ \right\rangle \end{bmatrix} \\ &= \frac{\hbar^2}{4} \begin{bmatrix}1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & -1 & 2 \\ 0 & 0 & 2 & -1 \end{bmatrix} \end{aligned} \hspace{\stretch{1}}(1.0.10)
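It’s easy to double check this matrix by building \mathbf{S}_1 \cdot \mathbf{S}_2 directly from Kronecker products (a NumPy sketch with \hbar = 1; note that the natural Kronecker product ordering is (++, +-, -+, --), so a permutation is needed to match the basis ordering used above):

import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

# S = (hbar/2) sigma with hbar = 1, so S1 . S2 = (1/4) sum_k sigma_1k sigma_2k.
H = sum(np.kron(s, I2) @ np.kron(I2, s) for s in (sx, sy, sz)) / 4.0

perm = [0, 3, 1, 2]   # reorder (++, +-, -+, --) -> (++, --, +-, -+)
expected = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, -1, 2],
                     [0, 0, 2, -1]]) / 4.0
assert np.allclose(H[np.ix_(perm, perm)], expected)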

Two of the eigenvalues we can read off by inspection, and for the other two we need to solve

\begin{aligned}0 =\begin{vmatrix}-\hbar^2/4 - \lambda & \hbar^2/2 \\ \hbar^2/2 & -\hbar^2/4 - \lambda\end{vmatrix}= (\hbar^2/4 + \lambda)^2 - (\hbar^2/2)^2\end{aligned} \hspace{\stretch{1}}(1.0.11)

or

\begin{aligned}\lambda = -\frac{\hbar^2}{4} \pm \frac{\hbar^2}{2} = \frac{\hbar^2}{4}, -\frac{3 \hbar^2}{4}.\end{aligned} \hspace{\stretch{1}}(1.0.12)

These are the last of the triplet energy eigenvalues and the singlet value that we expected from the spin addition method. The eigenvector for the \hbar^2/4 eigenvalue is given by the solution of

\begin{aligned}0 =\frac{\hbar^2}{2}\begin{bmatrix}-1 & 1 \\ 1 & -1\end{bmatrix}\begin{bmatrix}a \\ b\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(1.0.13)

So the eigenvector is

\begin{aligned}\frac{1}{{\sqrt{2}}} \left( {\left\lvert {+-} \right\rangle} + {\left\lvert {-+} \right\rangle} \right)\end{aligned} \hspace{\stretch{1}}(1.0.14)

For our -3\hbar^2/4 eigenvalue we seek

\begin{aligned}0 =\frac{\hbar^2}{2}\begin{bmatrix}1 & 1 \\ 1 & 1\end{bmatrix}\begin{bmatrix}a \\ b\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(1.0.15)

So the eigenvector is

\begin{aligned}\frac{1}{{\sqrt{2}}} \left( {\left\lvert {+-} \right\rangle} - {\left\lvert {-+} \right\rangle} \right)\end{aligned} \hspace{\stretch{1}}(1.0.16)

An orthonormal basis with respective eigenvalues \hbar^2/4 (\times 3), -3\hbar^2/4 is thus given by

\begin{aligned}\beta' = \left\{{\left\lvert {++} \right\rangle},{\left\lvert {--} \right\rangle},\frac{1}{{\sqrt{2}}} \left( {\left\lvert {+-} \right\rangle} + {\left\lvert {-+} \right\rangle} \right),\frac{1}{{\sqrt{2}}} \left( {\left\lvert {+-} \right\rangle} - {\left\lvert {-+} \right\rangle} \right)\right\}.\end{aligned} \hspace{\stretch{1}}(1.0.17)

Confirmation of spin additivity.

Let’s use this to confirm that for H = (\mathbf{S}_1 + \mathbf{S}_2)^2, the two spin 1/2 particles have a combined spin squared eigenvalue given by

\begin{aligned}S(S + 1) \hbar^2.\end{aligned} \hspace{\stretch{1}}(1.0.18)

With

\begin{aligned}(\mathbf{S}_1 + \mathbf{S}_2)^2 = \mathbf{S}_1^2 + \mathbf{S}_2^2 + 2 \mathbf{S}_1 \cdot \mathbf{S}_2,\end{aligned} \hspace{\stretch{1}}(1.0.19)

we have for the \hbar^2/4 energy eigenstate of \mathbf{S}_1 \cdot \mathbf{S}_2

\begin{aligned}2 \hbar^2 \frac{1}{{2}} \left( 1 + \frac{1}{{2}}  \right) + 2 \frac{\hbar^2}{4} = 2 \hbar^2,\end{aligned} \hspace{\stretch{1}}(1.0.20)

and for the -3\hbar^2/4 energy eigenstate of \mathbf{S}_1 \cdot \mathbf{S}_2

\begin{aligned}2 \hbar^2 \frac{1}{{2}} \left( 1 + \frac{1}{{2}}  \right) + 2 \left( - \frac{3 \hbar^2}{4}  \right) = 0.\end{aligned} \hspace{\stretch{1}}(1.0.21)

We get the 2 \hbar^2 and 0 eigenvalues respectively as expected.
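The same confirmation runs in a few lines of NumPy (a sketch with \hbar = 1, building the composite spin operators from Kronecker products):

import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

S1 = [0.5 * np.kron(s, I2) for s in (sx, sy, sz)]   # hbar = 1
S2 = [0.5 * np.kron(I2, s) for s in (sx, sy, sz)]

H = sum(a @ b for a, b in zip(S1, S2))              # S1 . S2
print(np.round(np.linalg.eigvalsh(H), 12))          # [-0.75  0.25  0.25  0.25]

Stot2 = sum((a + b) @ (a + b) for a, b in zip(S1, S2))
print(np.round(np.linalg.eigvalsh(Stot2), 12))      # [0. 2. 2. 2.], i.e. S(S+1) for S = 0, 1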


PHY456H1F: Quantum Mechanics II. Lecture 4 (Taught by Prof J.E. Sipe). Time independent perturbation theory (continued)

Posted by peeterjoot on September 23, 2011


Disclaimer.

Peeter’s lecture notes from class. May not be entirely coherent.

Time independent perturbation.

The setup

To recap, we were covering the time independent perturbation methods from section 16.1 of the text [1]. We start with a known Hamiltonian H_0, and alter it with the addition of a “small” perturbation

\begin{aligned}H = H_0 + \lambda H', \qquad \lambda \in [0,1]\end{aligned} \hspace{\stretch{1}}(2.1)

For the original operator, we assume that a complete set of eigenvalues and eigenkets is known

\begin{aligned}H_0 {\lvert {{\psi_s}^{(0)}} \rangle} = {E_s}^{(0)} {\lvert {{\psi_s}^{(0)}} \rangle}\end{aligned} \hspace{\stretch{1}}(2.2)

We seek the perturbed eigensolution

\begin{aligned}H {\lvert {\psi_s} \rangle} = E_s {\lvert {\psi_s} \rangle}\end{aligned} \hspace{\stretch{1}}(2.3)

and assumed a perturbative series representation for the energy eigenvalues in the new system

\begin{aligned}E_s = {E_s}^{(0)} + \lambda {E_s}^{(1)} + \lambda^2 {E_s}^{(2)} + \cdots\end{aligned} \hspace{\stretch{1}}(2.4)

Given an assumed representation for the new eigenkets in terms of the known basis

\begin{aligned}{\lvert {\psi_s} \rangle} = \sum_n c_{ns} {\lvert {{\psi_n}^{(0)}} \rangle} \end{aligned} \hspace{\stretch{1}}(2.5)

and a perturbative series representation for the probability coefficients

\begin{aligned}c_{ns} = {c_{ns}}^{(0)} + \lambda {c_{ns}}^{(1)} + \lambda^2 {c_{ns}}^{(2)} + \cdots,\end{aligned} \hspace{\stretch{1}}(2.6)

so that

\begin{aligned}{\lvert {\psi_s} \rangle} = \sum_n {c_{ns}}^{(0)} {\lvert {{\psi_n}^{(0)}} \rangle} +\lambda\sum_n {c_{ns}}^{(1)} {\lvert {{\psi_n}^{(0)}} \rangle} + \lambda^2\sum_n {c_{ns}}^{(2)} {\lvert {{\psi_n}^{(0)}} \rangle} + \cdots\end{aligned} \hspace{\stretch{1}}(2.7)

Setting \lambda = 0 requires

\begin{aligned}{c_{ns}}^{(0)} = \delta_{ns},\end{aligned} \hspace{\stretch{1}}(2.8)

for

\begin{aligned}\begin{aligned}{\lvert {\psi_s} \rangle} &= {\lvert {{\psi_s}^{(0)}} \rangle} +\lambda\sum_n {c_{ns}}^{(1)} {\lvert {{\psi_n}^{(0)}} \rangle} + \lambda^2\sum_n {c_{ns}}^{(2)} {\lvert {{\psi_n}^{(0)}} \rangle} + \cdots \\ &=\left(1 + \lambda {c_{ss}}^{(1)} + \lambda^2 {c_{ss}}^{(2)} + \cdots\right){\lvert {{\psi_s}^{(0)}} \rangle} + \lambda\sum_{n \ne s} {c_{ns}}^{(1)} {\lvert {{\psi_n}^{(0)}} \rangle} +\lambda^2\sum_{n \ne s} {c_{ns}}^{(2)} {\lvert {{\psi_n}^{(0)}} \rangle} + \cdots\end{aligned}\end{aligned} \hspace{\stretch{1}}(2.9)

We rescaled our kets

\begin{aligned}{\lvert {\bar{\psi}_s} \rangle} ={\lvert {{\psi_s}^{(0)}} \rangle} + \lambda\sum_{n \ne s} {\bar{c}_{ns}}^{(1)} {\lvert {{\psi_n}^{(0)}} \rangle} +\lambda^2\sum_{n \ne s} {\bar{c}_{ns}}^{(2)} {\lvert {{\psi_n}^{(0)}} \rangle} + \cdots\end{aligned} \hspace{\stretch{1}}(2.10)

where

\begin{aligned}{\bar{c}_{ns}}^{(j)} = \frac{{c_{ns}}^{(j)}}{1 + \lambda {c_{ss}}^{(1)} + \lambda^2 {c_{ss}}^{(2)} + \cdots}\end{aligned} \hspace{\stretch{1}}(2.11)

The normalization of the rescaled kets is then

\begin{aligned}\left\langle{{\bar{\psi}_s}} \vert {{\bar{\psi}_s}}\right\rangle =1+ \lambda^2\sum_{n \ne s} {\left\lvert{{\bar{c}_{ns}}^{(1)}}\right\rvert}^2+\cdots\equiv \frac{1}{{Z_s}}.\end{aligned} \hspace{\stretch{1}}(2.12)

One can then construct a renormalized ket if desired

\begin{aligned}{\lvert {\bar{\psi}_s} \rangle}_R = Z_s^{1/2} {\lvert {\bar{\psi}_s} \rangle},\end{aligned} \hspace{\stretch{1}}(2.13)

so that

\begin{aligned}({\lvert {\bar{\psi}_s} \rangle}_R)^\dagger {\lvert {\bar{\psi}_s} \rangle}_R = Z_s \left\langle{{\bar{\psi}_s}} \vert {{\bar{\psi}_s}}\right\rangle = 1.\end{aligned} \hspace{\stretch{1}}(2.14)

The meat.

That’s as far as we got last time. We continue by renaming terms in 2.10

\begin{aligned}{\lvert {\bar{\psi}_s} \rangle} ={\lvert {{\psi_s}^{(0)}} \rangle} + \lambda {\lvert {{\psi_s}^{(1)}} \rangle} + \lambda^2 {\lvert {{\psi_s}^{(2)}} \rangle} + \cdots\end{aligned} \hspace{\stretch{1}}(2.15)

where

\begin{aligned}{\lvert {{\psi_s}^{(j)}} \rangle} = \sum_{n \ne s} {\bar{c}_{ns}}^{(j)} {\lvert {{\psi_n}^{(0)}} \rangle}.\end{aligned} \hspace{\stretch{1}}(2.16)

Now we act on this with the Hamiltonian

\begin{aligned}H {\lvert {\bar{\psi}_s} \rangle} = E_s {\lvert {\bar{\psi}_s} \rangle},\end{aligned} \hspace{\stretch{1}}(2.17)

or

\begin{aligned}H {\lvert {\bar{\psi}_s} \rangle} - E_s {\lvert {\bar{\psi}_s} \rangle} = 0.\end{aligned} \hspace{\stretch{1}}(2.18)

Expanding this, we have

\begin{aligned}\begin{aligned}&(H_0 + \lambda H') \left({\lvert {{\psi_s}^{(0)}} \rangle} + \lambda {\lvert {{\psi_s}^{(1)}} \rangle} + \lambda^2 {\lvert {{\psi_s}^{(2)}} \rangle} + \cdots\right) \\ &\quad - \left( {E_s}^{(0)} + \lambda {E_s}^{(1)} + \lambda^2 {E_s}^{(2)} + \cdots \right)\left({\lvert {{\psi_s}^{(0)}} \rangle} + \lambda {\lvert {{\psi_s}^{(1)}} \rangle} + \lambda^2 {\lvert {{\psi_s}^{(2)}} \rangle} + \cdots\right)= 0.\end{aligned}\end{aligned} \hspace{\stretch{1}}(2.19)

We want to write this as

\begin{aligned}{\lvert {A} \rangle} + \lambda {\lvert {B} \rangle} + \lambda^2 {\lvert {C} \rangle} + \cdots = 0.\end{aligned} \hspace{\stretch{1}}(2.20)

This is

\begin{aligned}\begin{aligned}0 &=\lambda^0(H_0 - E_s^{(0)}) {\lvert {{\psi_s}^{(0)}} \rangle}  \\ &+ \lambda\left((H_0 - E_s^{(0)}) {\lvert {{\psi_s}^{(1)}} \rangle} +(H' - E_s^{(1)}) {\lvert {{\psi_s}^{(0)}} \rangle} \right) \\ &+ \lambda^2\left((H_0 - E_s^{(0)}) {\lvert {{\psi_s}^{(2)}} \rangle} +(H' - E_s^{(1)}) {\lvert {{\psi_s}^{(1)}} \rangle} -E_s^{(2)} {\lvert {{\psi_s}^{(0)}} \rangle} \right) \\ &\cdots\end{aligned}\end{aligned} \hspace{\stretch{1}}(2.21)

So we form

\begin{aligned}{\lvert {A} \rangle} &=(H_0 - E_s^{(0)}) {\lvert {{\psi_s}^{(0)}} \rangle} \\ {\lvert {B} \rangle} &=(H_0 - E_s^{(0)}) {\lvert {{\psi_s}^{(1)}} \rangle} +(H' - E_s^{(1)}) {\lvert {{\psi_s}^{(0)}} \rangle} \\ {\lvert {C} \rangle} &=(H_0 - E_s^{(0)}) {\lvert {{\psi_s}^{(2)}} \rangle} +(H' - E_s^{(1)}) {\lvert {{\psi_s}^{(1)}} \rangle} -E_s^{(2)} {\lvert {{\psi_s}^{(0)}} \rangle},\end{aligned} \hspace{\stretch{1}}(2.22)

and so forth.

Zeroth order in \lambda

Since H_0 {\lvert {{\psi_s}^{(0)}} \rangle} = E_s^{(0)} {\lvert {{\psi_s}^{(0)}} \rangle}, this first condition on {\lvert {A} \rangle} is not much more than a statement that 0 - 0 = 0.

First order in \lambda

How about {\lvert {B} \rangle} = 0? For this to be zero we require that both of the following are simultaneously zero

\begin{aligned}\left\langle{{{\psi_s}^{(0)}}} \vert {{B}}\right\rangle &= 0 \\ \left\langle{{{\psi_m}^{(0)}}} \vert {{B}}\right\rangle &= 0, \qquad m \ne s\end{aligned} \hspace{\stretch{1}}(2.25)

This first condition is

\begin{aligned}{\langle {{\psi_s}^{(0)}} \rvert} (H' - E_s^{(1)}) {\lvert {{\psi_s}^{(0)}} \rangle} = 0.\end{aligned} \hspace{\stretch{1}}(2.27)

With

\begin{aligned}{\langle {{\psi_m}^{(0)}} \rvert} H' {\lvert {{\psi_s}^{(0)}} \rangle} \equiv {H_{ms}}',\end{aligned} \hspace{\stretch{1}}(2.28)

this is

\begin{aligned}{H_{ss}}' = E_s^{(1)}.\end{aligned} \hspace{\stretch{1}}(2.29)

From the second condition we have

\begin{aligned}0 = {\langle {{\psi_m}^{(0)}} \rvert} (H_0 - E_s^{(0)}) {\lvert {{\psi_s}^{(1)}} \rangle} +{\langle {{\psi_m}^{(0)}} \rvert} (H' - E_s^{(1)}) {\lvert {{\psi_s}^{(0)}} \rangle} \end{aligned} \hspace{\stretch{1}}(2.30)

Utilizing the Hermitian nature of H_0 we can act backwards on {\langle {{\psi_m}^{(0)}} \rvert}

\begin{aligned}{\langle {{\psi_m}^{(0)}} \rvert} H_0=E_m^{(0)} {\langle {{\psi_m}^{(0)}} \rvert}.\end{aligned} \hspace{\stretch{1}}(2.31)

We note that \left\langle{{{\psi_m}^{(0)}}} \vert {{{\psi_s}^{(0)}}}\right\rangle = 0, m \ne s. We can also expand the \left\langle{{{\psi_m}^{(0)}}} \vert {{{\psi_s}^{(1)}}}\right\rangle, which is

\begin{aligned}\left\langle{{{\psi_m}^{(0)}}} \vert {{{\psi_s}^{(1)}}}\right\rangle ={\langle {{\psi_m}^{(0)}} \rvert}\left(\sum_{n \ne s} {\bar{c}_{ns}}^{(1)} {\lvert {{\psi_n}^{(0)}} \rangle}\right).\end{aligned}

I found that reducing this sum wasn’t obvious until some actual integers were plugged in. Suppose that s = 3, and m = 5, then this is

\begin{aligned}\left\langle{{{\psi_5}^{(0)}}} \vert {{{\psi_3}^{(1)}}}\right\rangle &={\langle {{\psi_5}^{(0)}} \rvert}\left(\sum_{n = 0, 1, 2, 4, 5, \cdots} {\bar{c}_{n3}}^{(1)} {\lvert {{\psi_n}^{(0)}} \rangle}\right) \\ &={\bar{c}_{53}}^{(1)} \left\langle{{{\psi_5}^{(0)}}} \vert {{{\psi_5}^{(0)}}}\right\rangle \\ &={\bar{c}_{53}}^{(1)}.\end{aligned}

More generally that is

\begin{aligned}\left\langle{{{\psi_m}^{(0)}}} \vert {{{\psi_s}^{(1)}}}\right\rangle ={\bar{c}_{ms}}^{(1)}.\end{aligned} \hspace{\stretch{1}}(2.32)

Utilizing this gives us

\begin{aligned}0 = ( E_m^{(0)} - E_s^{(0)}) {\bar{c}_{ms}}^{(1)}+{H_{ms}}' \end{aligned} \hspace{\stretch{1}}(2.33)

And summarizing what we learn from our {\lvert {B} \rangle} = 0 conditions we have

\begin{aligned}E_s^{(1)} &= {H_{ss}}' \\ {\bar{c}_{ms}}^{(1)}&=\frac{{H_{ms}}' }{ E_s^{(0)} - E_m^{(0)} }\end{aligned} \hspace{\stretch{1}}(2.34)

Second order in \lambda

Doing the same thing for {\lvert {C} \rangle} = 0 we form (or assume)

\begin{aligned}\left\langle{{{\psi_s}^{(0)}}} \vert {{C}}\right\rangle &= 0 \\ \left\langle{{{\psi_m}^{(0)}}} \vert {{C}}\right\rangle &= 0, \qquad m \ne s\end{aligned} \hspace{\stretch{1}}(2.36)

\begin{aligned}0 &= \left\langle{{{\psi_s}^{(0)}}} \vert {{C}}\right\rangle  \\ &={\langle {{\psi_s}^{(0)}} \rvert}\left((H_0 - E_s^{(0)}) {\lvert {{\psi_s}^{(2)}} \rangle} +(H' - E_s^{(1)}) {\lvert {{\psi_s}^{(1)}} \rangle} -E_s^{(2)} {\lvert {{\psi_s}^{(0)}} \rangle}  \right) \\ &=(E_s^{(0)} - E_s^{(0)}) \left\langle{{{\psi_s}^{(0)}}} \vert {{{\psi_s}^{(2)}}}\right\rangle +{\langle {{\psi_s}^{(0)}} \rvert}(H' - E_s^{(1)}) {\lvert {{\psi_s}^{(1)}} \rangle} -E_s^{(2)} \left\langle{{{\psi_s}^{(0)}}} \vert {{{\psi_s}^{(0)}}}\right\rangle \end{aligned}

We need to know what the \left\langle{{{\psi_s}^{(0)}}} \vert {{{\psi_s}^{(1)}}}\right\rangle is, and find that it is zero

\begin{aligned}\left\langle{{{\psi_s}^{(0)}}} \vert {{{\psi_s}^{(1)}}}\right\rangle={\langle {{\psi_s}^{(0)}} \rvert}\sum_{n \ne s} {\bar{c}_{ns}}^{(1)} {\lvert {{\psi_n}^{(0)}} \rangle}\end{aligned} \hspace{\stretch{1}}(2.38)

Again, suppose that s = 3. Our sum ranges over all n \ne 3, so all the brakets are zero. Utilizing that we have

\begin{aligned}E_s^{(2)} &={\langle {{\psi_s}^{(0)}} \rvert} H' {\lvert {{\psi_s}^{(1)}} \rangle}  \\ &={\langle {{\psi_s}^{(0)}} \rvert} H' \sum_{m \ne s} {\bar{c}_{ms}}^{(1)} {\lvert {{\psi_m}^{(0)}} \rangle} \\ &=\sum_{m \ne s} {\bar{c}_{ms}}^{(1)} {H_{sm}}'\end{aligned}

From 2.34 we have

\begin{aligned}E_s^{(2)} =\sum_{m \ne s} \frac{{H_{ms}}' }{ E_s^{(0)} - E_m^{(0)} }{H_{sm}}'=\sum_{m \ne s} \frac{{\left\lvert{{H_{ms}}'}\right\rvert}^2 }{ E_s^{(0)} - E_m^{(0)} }\end{aligned} \hspace{\stretch{1}}(2.39)

We can now summarize by forming the leading terms of the perturbed energy and the corresponding kets

\begin{aligned}E_s &= E_s^{(0)} + \lambda {H_{ss}}' + \lambda^2 \sum_{m \ne s} \frac{{\left\lvert{{H_{ms}}'}\right\rvert}^2 }{ E_s^{(0)} - E_m^{(0)} } + \cdots\\ {\lvert {\bar{\psi}_s} \rangle} &= {\lvert {{\psi_s}^{(0)}} \rangle} + \lambda\sum_{m \ne s} \frac{{H_{ms}}'}{ E_s^{(0)} - E_m^{(0)} } {\lvert {{\psi_m}^{(0)}} \rangle}+ \cdots\end{aligned} \hspace{\stretch{1}}(2.40)
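These formulas are easy to test against exact diagonalization of a small random system (a NumPy sketch; H_0 is taken diagonal with well separated levels, and H' is a small random Hermitian matrix):

import numpy as np

rng = np.random.default_rng(2)
n = 6
E0 = np.arange(n) * 2.0                   # non-degenerate unperturbed energies
H0 = np.diag(E0)
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
Hp = 0.01 * (M + M.conj().T)              # small Hermitian perturbation

s = 2                                     # state to perturb
E1 = Hp[s, s].real                        # first order: H'_ss
E2 = sum(abs(Hp[m, s])**2 / (E0[s] - E0[m])
         for m in range(n) if m != s)     # second order sum

E_exact = np.linalg.eigvalsh(H0 + Hp)[s]
print(E_exact - (E0[s] + E1 + E2))        # residual is O(H'^3)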

We can continue calculating, but are hopeful that we can stop the calculation without doing more work, even if \lambda = 1. If one supposes that the

\begin{aligned}\sum_{m \ne s} \frac{{H_{ms}}'}{ E_s^{(0)} - E_m^{(0)} } \end{aligned} \hspace{\stretch{1}}(2.42)

term is “small”, then we can hope that truncating the sum will be reasonable for \lambda = 1. This would be the case if

\begin{aligned}{H_{ms}}' \ll {\left\lvert{ E_s^{(0)} - E_m^{(0)} }\right\rvert},\end{aligned} \hspace{\stretch{1}}(2.43)

however, to put some mathematical rigor into making a statement of such smallness takes a lot of work. We are referred to [2]. Incidentally, these are loosely referred to as the first and second testaments, because of the author’s name, and the fact that they came as two volumes historically.

References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.

[2] A. Messiah, G.M. Temmer, and J. Potter. Quantum mechanics: two volumes bound as one. Dover Publications New York, 1999.


PHY456H1F, Quantum Mechanics II. My solutions to problem set 1 (ungraded).

Posted by peeterjoot on September 19, 2011


Harmonic oscillator.

Consider

\begin{aligned}H_0 = \frac{P^2}{2m} + \frac{1}{{2}} m \omega^2 X^2\end{aligned} \hspace{\stretch{1}}(1.1)

Since it’s been a while let’s compute the raising and lowering factorization that was used so extensively for this problem.

It was of the form

\begin{aligned}H_0 = (a X - i b P)(a X + i b P) + \cdots\end{aligned} \hspace{\stretch{1}}(1.2)

Why this factorization has an imaginary factor in it is a good question. It’s not one that is given any sort of rationale in the text ([1]).

It’s clear that we want a = \sqrt{m/2} \omega and b = 1/\sqrt{2m}. The difference is then

\begin{aligned}H_0 - (a X - i b P)(a X + i b P)=- i a b \left[{X},{P}\right]  = - i \frac{\omega}{2} \left[{X},{P}\right]\end{aligned} \hspace{\stretch{1}}(1.3)

That commutator is an i\hbar value, but what was the sign? Let’s compute so we don’t get it wrong

\begin{aligned}\left[{x},{ p}\right] \psi&= -i \hbar \left[{x},{\partial_x}\right] \psi \\ &= -i \hbar ( x \partial_x \psi - \partial_x (x \psi) ) \\ &= -i \hbar ( - \psi ) \\ &= i \hbar \psi\end{aligned}

So we have

\begin{aligned}H_0 =\left(\omega \sqrt{\frac{m}{2}} X - i \sqrt{\frac{1}{2m}} P\right)\left(\omega \sqrt{\frac{m}{2}} X + i \sqrt{\frac{1}{2m}} P\right)+ \frac{\hbar \omega}{2}\end{aligned} \hspace{\stretch{1}}(1.4)

Factoring out an \hbar \omega produces the form of the Hamiltonian that we used before

\begin{aligned}H_0 =\hbar \omega \left(\left(\sqrt{\frac{m \omega}{2 \hbar}} X - i \sqrt{\frac{1}{2m \hbar \omega}} P\right)\left(\sqrt{\frac{m \omega}{2 \hbar}} X + i \sqrt{\frac{1}{2m \hbar \omega}} P\right)+ \frac{1}{{2}}\right).\end{aligned} \hspace{\stretch{1}}(1.5)

The factors were labeled the raising (a^\dagger) and lowering (a) operators respectively, and written

\begin{aligned}H_0 &= \hbar \omega \left( a^\dagger a + \frac{1}{{2}} \right) \\ a &= \sqrt{\frac{m \omega}{2 \hbar}} X + i \sqrt{\frac{1}{2m \hbar \omega}} P \\ a^\dagger &= \sqrt{\frac{m \omega}{2 \hbar}} X - i \sqrt{\frac{1}{2m \hbar \omega}} P.\end{aligned} \hspace{\stretch{1}}(1.6)

Observe that we can find the inverse relations

\begin{aligned}X &= \sqrt{ \frac{\hbar}{2 m \omega} } \left( a + a^\dagger \right) \\ P &= i \sqrt{ \frac{m \hbar \omega}{2} } \left( a^\dagger  - a \right)\end{aligned} \hspace{\stretch{1}}(1.9)

Question
What is a good reason that we chose this particular factorization? For example, a quick computation shows that we could have also picked

\begin{aligned}H_0 = \hbar \omega \left( a a^\dagger - \frac{1}{{2}} \right).\end{aligned} \hspace{\stretch{1}}(1.11)

I don’t know the answer. That said, this second factorization is useful in that it provides the commutator relation between the raising and lowering operators, since subtracting 1.11 and 1.6 yields

\begin{aligned}\left[{a},{a^\dagger}\right] = 1.\end{aligned} \hspace{\stretch{1}}(1.12)

If we suppose that we have eigenstates for the operator a^\dagger a of the form

\begin{aligned}a^\dagger a {\lvert {n} \rangle} = \lambda_n {\lvert {n} \rangle},\end{aligned} \hspace{\stretch{1}}(1.13)

then the problem of finding the eigensolution of H_0 reduces to solving this problem. Because a^\dagger a commutes with 1/2, an eigenstate of a^\dagger a is also an eigenstate of H_0. Utilizing 1.12 we then have

\begin{aligned}a^\dagger a ( a {\lvert {n} \rangle} )&= (a a^\dagger - 1 ) a {\lvert {n} \rangle} \\ &= a (a^\dagger a - 1 ) {\lvert {n} \rangle} \\ &= a (\lambda_n - 1 ) {\lvert {n} \rangle} \\ &= (\lambda_n - 1 ) a {\lvert {n} \rangle},\end{aligned}

so we see that a {\lvert {n} \rangle} is an eigenstate of a^\dagger a with eigenvalue \lambda_n - 1.

Similarly for the raising operator

\begin{aligned}a^\dagger a ( a^\dagger {\lvert {n} \rangle} )&=a^\dagger (a  a^\dagger) {\lvert {n} \rangle} \\ &=a^\dagger (a^\dagger a + 1) {\lvert {n} \rangle} \\ &=(\lambda_n + 1) a^\dagger {\lvert {n} \rangle},\end{aligned}

and find that a^\dagger {\lvert {n} \rangle} is also an eigenstate of a^\dagger a with eigenvalue \lambda_n + 1.

Supposing that there is a lowest energy level (because the potential V(x) = m \omega^2 x^2 /2 has a lower bound of zero), then for the lowest energy state {\lvert {0} \rangle}, operating with the lowering operator a must give zero

\begin{aligned}a {\lvert {0} \rangle} = 0\end{aligned} \hspace{\stretch{1}}(1.14)

Thus

\begin{aligned}a^\dagger a {\lvert {0} \rangle} = 0,\end{aligned} \hspace{\stretch{1}}(1.15)

and

\begin{aligned}\lambda_0 = 0.\end{aligned} \hspace{\stretch{1}}(1.16)

This seems like a small bit of sleight of hand, since it sneakily supplies an integer value to \lambda_0 where up to this point 0 was just a label.

If the eigenvalue equation we are trying to solve for the Hamiltonian is

\begin{aligned}H_0 {\lvert {n} \rangle} = E_n {\lvert {n} \rangle},\end{aligned} \hspace{\stretch{1}}(1.17)

then we must have

\begin{aligned}E_n = \hbar \omega \left(\lambda_n + \frac{1}{{2}} \right) = \hbar \omega \left(n + \frac{1}{{2}} \right)\end{aligned} \hspace{\stretch{1}}(1.18)

Part (a)

We’ve now got enough context to attempt the first part of the question, calculation of

\begin{aligned}{\langle {n} \rvert} X^4 {\lvert {n} \rangle}\end{aligned} \hspace{\stretch{1}}(1.19)

We’ve calculated things like this before, such as

\begin{aligned}{\langle {n} \rvert} X^2 {\lvert {n} \rangle}&=\frac{\hbar}{2 m \omega} {\langle {n} \rvert} (a + a^\dagger)^2 {\lvert {n} \rangle}\end{aligned}

To continue we need an exact relation between {\lvert {n} \rangle} and {\lvert {n \pm 1} \rangle}. Recall that a {\lvert {n} \rangle} was an eigenstate of a^\dagger a with eigenvalue n - 1. This implies that the eigenstates a {\lvert {n} \rangle} and {\lvert {n-1} \rangle} are proportional

\begin{aligned}a {\lvert {n} \rangle} = c_n {\lvert {n - 1} \rangle},\end{aligned} \hspace{\stretch{1}}(1.20)

or

\begin{aligned}{\langle {n} \rvert} a^\dagger a {\lvert {n} \rangle} &= {\left\lvert{c_n}\right\rvert}^2 \left\langle{{n - 1}} \vert {{n-1}}\right\rangle = {\left\lvert{c_n}\right\rvert}^2 \\ n \left\langle{{n}} \vert {{n}}\right\rangle &= {\left\lvert{c_n}\right\rvert}^2 \\ n &= {\left\lvert{c_n}\right\rvert}^2,\end{aligned}

so that, fixing the arbitrary phase by taking c_n real and positive,

\begin{aligned}a {\lvert {n} \rangle} = \sqrt{n} {\lvert {n - 1} \rangle}.\end{aligned} \hspace{\stretch{1}}(1.21)

Similarly let

\begin{aligned}a^\dagger {\lvert {n} \rangle} = b_n {\lvert {n + 1} \rangle},\end{aligned} \hspace{\stretch{1}}(1.22)

or

\begin{aligned}{\langle {n} \rvert} a a^\dagger {\lvert {n} \rangle} &= {\left\lvert{b_n}\right\rvert}^2 \left\langle{{n + 1}} \vert {{n+1}}\right\rangle = {\left\lvert{b_n}\right\rvert}^2 \\ {\langle {n} \rvert} (1 + a^\dagger a) {\lvert {n} \rangle} &= {\left\lvert{b_n}\right\rvert}^2 \\ 1 + n &= {\left\lvert{b_n}\right\rvert}^2,\end{aligned}

so that

\begin{aligned}a^\dagger {\lvert {n} \rangle} = \sqrt{n+1} {\lvert {n + 1} \rangle}.\end{aligned} \hspace{\stretch{1}}(1.23)
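
These relations are simple to spot check numerically. Here is a minimal sketch (assuming numpy is available; the truncation size N = 8 is an arbitrary choice) that represents a in a truncated number basis and verifies 1.12, 1.21 and 1.23:

import numpy as np
N = 8
n = np.arange(N)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # matrix elements <n-1| a |n> = sqrt(n), from 1.21
ad = a.conj().T                              # <n+1| a^dagger |n> = sqrt(n+1), from 1.23
print(np.allclose(ad @ a, np.diag(n)))       # a^dagger a |n> = n |n>
comm = a @ ad - ad @ a                       # [a, a^dagger]
# identity, except for the last diagonal entry, an artifact of truncating the basis
print(np.allclose(comm[:-1, :-1], np.eye(N - 1)))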

We can now return to 1.19, and find

\begin{aligned}{\langle {n} \rvert} X^4 {\lvert {n} \rangle}&=\frac{\hbar^2}{4 m^2 \omega^2} {\langle {n} \rvert} (a + a^\dagger)^4 {\lvert {n} \rangle}\end{aligned}

Consider half of this braket

\begin{aligned}(a + a^\dagger)^2 {\lvert {n} \rangle}&=\left( a^2 + (a^\dagger)^2 + a^\dagger a + a a^\dagger \right) {\lvert {n} \rangle} \\ &=\left( a^2 + (a^\dagger)^2 + a^\dagger a + (1 + a^\dagger a) \right) {\lvert {n} \rangle} \\ &=\left( a^2 + (a^\dagger)^2 + 1 + 2 a^\dagger a \right) {\lvert {n} \rangle} \\ &=\sqrt{n}\sqrt{n-1} {\lvert {n-2} \rangle}+\sqrt{n+1}\sqrt{n+2} {\lvert {n + 2} \rangle}+{\lvert {n} \rangle}+  2 n {\lvert {n} \rangle}\end{aligned}

Squaring, that is, taking the inner product of this ket with its dual bra (utilizing the Hermitian nature of the X operator), we have

\begin{aligned}{\langle {n} \rvert} X^4 {\lvert {n} \rangle}=\frac{\hbar^2}{4 m^2 \omega^2}\left(n(n-1) + (n+1)(n+2) + (1 + 2n)^2\right)=\frac{\hbar^2}{4 m^2 \omega^2}\left( 6 n^2 + 6 n + 3 \right)\end{aligned} \hspace{\stretch{1}}(1.24)
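
A quick numeric check of this coefficient, using the same truncated matrix representation (a sketch assuming numpy; units are chosen so that \hbar/(2 m \omega) = 1, and N is large enough that truncation does not touch the small n diagonal entries):

import numpy as np
N = 40
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
x = a + a.conj().T                           # X in units with hbar/(2 m omega) = 1
x4 = np.linalg.matrix_power(x, 4)
for n in range(5):
    print(n, x4[n, n], 6*n**2 + 6*n + 3)     # the diagonal matches 6 n^2 + 6 n + 3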

Part (b)

Find the ground state energy of the Hamiltonian H = H_0 + \gamma X^2 for \gamma > 0.

The new Hamiltonian has the form

\begin{aligned}H = \frac{P^2}{2m} + \frac{1}{{2}} m \left(\omega^2 + \frac{2 \gamma}{m} \right) X^2 =\frac{P^2}{2m} + \frac{1}{{2}} m {\omega'}^2 X^2,\end{aligned} \hspace{\stretch{1}}(1.25)

where

\begin{aligned}\omega' = \sqrt{ \omega^2 + \frac{2 \gamma}{m} }\end{aligned} \hspace{\stretch{1}}(1.26)

The energy states of the Hamiltonian are thus

\begin{aligned}E_n = \hbar \sqrt{ \omega^2 + \frac{2 \gamma}{m} } \left( n + \frac{1}{{2}} \right)\end{aligned} \hspace{\stretch{1}}(1.27)

and the ground state of the modified Hamiltonian H is thus

\begin{aligned}E_0 = \frac{\hbar}{2} \sqrt{ \omega^2 + \frac{2 \gamma}{m} }\end{aligned} \hspace{\stretch{1}}(1.28)

Part (c)

Find the ground state energy of the Hamiltonian H = H_0 - \alpha X.

With a bit of play, this new Hamiltonian can be factored into

\begin{aligned}H= \hbar \omega \left( b^\dagger b + \frac{1}{{2}} \right) - \frac{\alpha^2}{2 m \omega^2}= \hbar \omega \left( b b^\dagger - \frac{1}{{2}} \right) - \frac{\alpha^2}{2 m \omega^2},\end{aligned} \hspace{\stretch{1}}(1.29)

where

\begin{aligned}b &= \sqrt{\frac{m \omega}{2\hbar}} X + \frac{i P}{\sqrt{2 m \hbar \omega}} - \frac{\alpha}{\omega \sqrt{ 2 m \hbar \omega }} \\ b^\dagger &= \sqrt{\frac{m \omega}{2\hbar}} X - \frac{i P}{\sqrt{2 m \hbar \omega}} - \frac{\alpha}{\omega \sqrt{ 2 m \hbar \omega }}.\end{aligned} \hspace{\stretch{1}}(1.30)

From 1.29 we see that we have the same sort of commutator relationship as in the original Hamiltonian

\begin{aligned}\left[{b},{b^\dagger}\right] = 1,\end{aligned} \hspace{\stretch{1}}(1.32)

and because of this, all the preceding arguments follow unchanged with the exception that the energy eigenstates of this Hamiltonian are shifted by a constant

\begin{aligned}H {\lvert {n} \rangle} = \left( \hbar \omega \left( n + \frac{1}{{2}} \right) - \frac{\alpha^2}{2 m \omega^2} \right) {\lvert {n} \rangle},\end{aligned} \hspace{\stretch{1}}(1.33)

where the {\lvert {n} \rangle} states are simultaneous eigenstates of the b^\dagger b operator

\begin{aligned}b^\dagger b {\lvert {n} \rangle} = n {\lvert {n} \rangle}.\end{aligned} \hspace{\stretch{1}}(1.34)

The ground state energy is then

\begin{aligned}E_0 = \frac{\hbar \omega }{2} - \frac{\alpha^2}{2 m \omega^2}.\end{aligned} \hspace{\stretch{1}}(1.35)
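
This is also easy to check numerically. A sketch (assuming numpy, with \hbar = m = \omega = 1 so that the expected ground state energy is 1/2 - \alpha^2/2; the value \alpha = 0.3 is arbitrary):

import numpy as np
N = 60
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
ad = a.conj().T
alpha = 0.3
X = (a + ad)/np.sqrt(2.0)                    # X = sqrt(hbar/(2 m omega)) (a + a^dagger)
H = ad @ a + 0.5*np.eye(N) - alpha*X         # H = H_0 - alpha X
print(np.linalg.eigvalsh(H)[0], 0.5 - alpha**2/2)   # agree to many decimal places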

This makes sense. A translation of the system as a whole should not affect its energy level distribution, but we have set our reference potential differently, and so have this constant energy adjustment to the entire spectrum.

Hydrogen atom and spherical harmonics.

We are asked to show that for any eigenkets of the hydrogen atom {\lvert {\Phi_{nlm}} \rangle} we have

\begin{aligned}{\langle {\Phi_{nlm}} \rvert} X {\lvert {\Phi_{nlm}} \rangle} ={\langle {\Phi_{nlm}} \rvert} Y {\lvert {\Phi_{nlm}} \rangle} ={\langle {\Phi_{nlm}} \rvert} Z {\lvert {\Phi_{nlm}} \rangle}.\end{aligned} \hspace{\stretch{1}}(2.36)

The summary sheet provides us with the wavefunction

\begin{aligned}\left\langle{\mathbf{r}} \vert {{\Phi_{nlm}}}\right\rangle = \frac{2}{n^2 a_0^{3/2}} \sqrt{\frac{(n-l-1)!}{((n+l)!)^3}} F_{nl}\left( \frac{2r}{n a_0} \right) Y_l^m(\theta, \phi),\end{aligned} \hspace{\stretch{1}}(2.37)

where F_{nl} is a real valued function defined in terms of Laguerre polynomials. Working with the expectation of the X operator to start with, we have

\begin{aligned}{\langle {\Phi_{nlm}} \rvert} X {\lvert {\Phi_{nlm}} \rangle} &=\int \left\langle{{\Phi_{nlm}}} \vert {{\mathbf{r}'}}\right\rangle {\langle {\mathbf{r}'} \rvert} X {\lvert {\mathbf{r}} \rangle} \left\langle{\mathbf{r}} \vert {{\Phi_{nlm}}}\right\rangle d^3 \mathbf{r} d^3 \mathbf{r}' \\ &=\int \left\langle{{\Phi_{nlm}}} \vert {{\mathbf{r}'}}\right\rangle \delta(\mathbf{r} - \mathbf{r}') r \sin\theta \cos\phi \left\langle{\mathbf{r}} \vert {{\Phi_{nlm}}}\right\rangle d^3 \mathbf{r} d^3 \mathbf{r}' \\ &=\int \Phi_{nlm}^{*}(\mathbf{r}) r \sin\theta \cos\phi \Phi_{nlm}(\mathbf{r}) d^3 \mathbf{r} \\ &\sim\int r^2 dr {\left\lvert{ F_{nl}\left(\frac{2 r}{ n a_0} \right)}\right\rvert}^2 r \int \sin\theta d\theta d\phi{Y_l^m}^{*}(\theta, \phi) \sin\theta \cos\phi Y_l^m(\theta, \phi) \\ \end{aligned}

Recalling that the only \phi dependence in Y_l^m is e^{i m \phi} we can perform the d\phi integration directly, which is

\begin{aligned}\int_{\phi=0}^{2\pi} \cos\phi d\phi e^{-i m \phi} e^{i m \phi} = 0.\end{aligned} \hspace{\stretch{1}}(2.38)

We have the same story for the Y expectation which is

\begin{aligned}{\langle {\Phi_{nlm}} \rvert} Y {\lvert {\Phi_{nlm}} \rangle} \sim\int r^2 dr {\left\lvert{F_{nl}\left( \frac{2 r}{ n a_0} \right)}\right\rvert}^2 r \int \sin\theta d\theta d\phi{Y_l^m}^{*}(\theta, \phi) \sin\theta \sin\phi Y_l^m(\theta, \phi).\end{aligned} \hspace{\stretch{1}}(2.39)

Our \phi integral is then just

\begin{aligned}\int_{\phi=0}^{2\pi} \sin\phi d\phi e^{-i m \phi} e^{i m \phi} = 0,\end{aligned} \hspace{\stretch{1}}(2.40)

also zero. The Z expectation is a slightly different story. There we have

\begin{aligned}\begin{aligned}{\langle {\Phi_{nlm}} \rvert} Z {\lvert {\Phi_{nlm}} \rangle} &\sim\int dr {\left\lvert{F_{nl}\left( \frac{2 r}{ n a_0} \right)}\right\rvert}^2 r^3  \\ &\quad \int_0^{2\pi} d\phi\int_0^\pi \sin \theta d\theta\left( \sin\theta \right)^{-2m}\left( \frac{d^{l - m}}{d (\cos\theta)^{l-m}} \sin^{2l}\theta \right)^2\cos\theta.\end{aligned}\end{aligned} \hspace{\stretch{1}}(2.41)

Within this last integral we can make the substitution

\begin{aligned}u &= \cos\theta \\ \sin\theta d\theta &= - d(\cos\theta) = -du \\ u &\in [1, -1],\end{aligned} \hspace{\stretch{1}}(2.42)

and the integral takes the form

\begin{aligned}-\int_{-1}^1 (-du) \frac{1}{{(1 - u^2)^m}} \left( \frac{d^{l-m}}{d u^{l -m }} (1 - u^2)^l\right)^2 u.\end{aligned} \hspace{\stretch{1}}(2.45)

Here we have the product of two even functions, times one odd function (u), over a symmetric interval, so the end result is zero, completing the problem.
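
The parity argument is easy to confirm symbolically for specific quantum numbers. A sketch assuming sympy is available, with the sample values l = 3, m = 1 chosen arbitrarily:

from sympy import symbols, diff, integrate
u = symbols('u')
l, m = 3, 1                                  # arbitrary sample values, l >= m >= 0
f = diff((1 - u**2)**l, u, l - m)            # d^(l-m)/du^(l-m) (1 - u^2)^l
print(integrate(u*f**2/(1 - u**2)**m, (u, -1, 1)))   # 0, as the parity argument predicts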

I wasn’t able to see how to exploit the parity result suggested in the problem, but it wasn’t so bad to show these directly.

Angular momentum operator.

Working with the appropriate expressions in Cartesian components, confirm that L_i {\lvert {\psi} \rangle} = 0 for each component of angular momentum L_i, if \left\langle{\mathbf{r}} \vert {{\psi}}\right\rangle = \psi(\mathbf{r}) is in fact only a function of r = {\left\lvert{\mathbf{r}}\right\rvert}.

In order to proceed, we will have to consider a matrix element, so that we can operate on {\lvert {\psi} \rangle} in position space. For that matrix element, we can proceed to insert complete states, and reduce the problem to a question of wavefunctions. That is

\begin{aligned}{\langle {\mathbf{r}} \rvert} L_i {\lvert {\psi} \rangle}&=\int d^3 \mathbf{r}' {\langle {\mathbf{r}} \rvert} L_i {\lvert {\mathbf{r}'} \rangle} \left\langle{{\mathbf{r}'}} \vert {{\psi}}\right\rangle \\ &=\int d^3 \mathbf{r}' {\langle {\mathbf{r}} \rvert} \epsilon_{i a b} X_a P_b {\lvert {\mathbf{r}'} \rangle} \left\langle{{\mathbf{r}'}} \vert {{\psi}}\right\rangle \\ &=-i \hbar \epsilon_{i a b} \int d^3 \mathbf{r}' x_a {\langle {\mathbf{r}} \rvert} \frac{\partial {\psi(\mathbf{r}')}}{\partial {X_b}} {\lvert {\mathbf{r}'} \rangle}  \\ &=-i \hbar \epsilon_{i a b} \int d^3 \mathbf{r}' x_a \frac{\partial {\psi(\mathbf{r}')}}{\partial {x_b}} \left\langle{\mathbf{r}} \vert {{\mathbf{r}'}}\right\rangle  \\ &=-i \hbar \epsilon_{i a b} \int d^3 \mathbf{r}' x_a \frac{\partial {\psi(\mathbf{r}')}}{\partial {x_b}} \delta^3(\mathbf{r} - \mathbf{r}') \\ &=-i \hbar \epsilon_{i a b} x_a \frac{\partial {\psi(\mathbf{r})}}{\partial {x_b}} \end{aligned}

With \psi(\mathbf{r}) = \psi(r) we have

\begin{aligned}{\langle {\mathbf{r}} \rvert} L_i {\lvert {\psi} \rangle}&=-i \hbar \epsilon_{i a b} x_a \frac{\partial {\psi(r)}}{\partial {x_b}}  \\ &=-i \hbar \epsilon_{i a b} x_a \frac{\partial {r}}{\partial {x_b}} \frac{d\psi(r)}{dr}  \\ &=-i \hbar \epsilon_{i a b} x_a \frac{1}{{2}} 2 x_b \frac{1}{{r}} \frac{d\psi(r)}{dr}  \\ \end{aligned}

We are left with a sum of the symmetric product x_a x_b with the antisymmetric tensor \epsilon_{i a b}, so this is zero for all i \in [1,3].
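
This can be confirmed directly in coordinates. A sketch assuming sympy, applying the z component x \partial_y - y \partial_x (dropping the -i \hbar factor) to an arbitrary radial function:

from sympy import symbols, sqrt, Function, diff, simplify
x, y, z = symbols('x y z', real=True)
f = Function('f')
r = sqrt(x**2 + y**2 + z**2)
Lz = x*diff(f(r), y) - y*diff(f(r), x)       # L_z psi, up to the -i hbar factor
print(simplify(Lz))                          # 0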



Lorentz Force Trajectory.

Posted by peeterjoot on September 10, 2011

Solving the Lorentz force equation in the non-relativistic limit.

The problem.

[1] treats the solution of the Lorentz force equation in covariant form. Let’s try this for non-relativistic motion for constant fields, but without making the usual assumptions about perpendicular electric and magnetic fields, or any particular alignment with the coordinate axes. Our equation to solve is

\begin{aligned}\frac{d}{dt} \left( \gamma m \mathbf{v} \right) = q \left( \mathbf{E} + \frac{\mathbf{v}}{c} \times \mathbf{B} \right),\end{aligned} \hspace{\stretch{1}}(1.1)

so in the non-relativistic limit we want to solve the matrix equation

\begin{aligned}\mathbf{v}' &= \frac{q}{m} \mathbf{E} + \frac{q}{ m c } \Omega \mathbf{v} \\ \Omega &=\begin{bmatrix}0 & B_3 & -B_2 \\ -B_3 & 0 & B_1 \\ B_2 & -B_1 & 0 \\ \end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(1.2)

First attempt.

This is very much like the plain old LDE

\begin{aligned}x' = a + b x,\end{aligned} \hspace{\stretch{1}}(1.4)

which we can solve using integrating factors

\begin{aligned}x' - b x &= a \\ e^{b t} \left( x e^{-b t} \right)' &= a.\end{aligned}

This we can rearrange and integrate to find a solution to the non-homogeneous problem

\begin{aligned}x e^{-b t} = \int a e^{-b t} dt.\end{aligned} \hspace{\stretch{1}}(1.5)

This solution to the non-homogeneous equation is thus

\begin{aligned}x - x_0 = e^{b t} \int_{\tau = 0}^t a e^{-b \tau} d\tau = \frac{a}{b} \left(e^{bt} - 1 \right).\end{aligned} \hspace{\stretch{1}}(1.6)

Because this already incorporates the homogeneous solution x = C e^{b t}, this is also the general form of the solution.
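
As an aside, this is exactly what a symbolic solver produces. A sketch assuming sympy (and that its ODE solver handles this symbolic initial condition, as recent versions do):

from sympy import symbols, Function, Eq, dsolve
t = symbols('t')
a, b = symbols('a b', nonzero=True)
x = Function('x')
print(dsolve(Eq(x(t).diff(t), a + b*x(t)), x(t), ics={x(0): 0}))
# x(t) = a*(exp(b*t) - 1)/b, up to rearrangement, matching 1.6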

Can we do something similar for the matrix equation of 1.2? It is tempting to try, rearranging in the same way like so

\begin{aligned}e^{ \frac{q}{m c}\Omega t} \left( e^{-\frac{q}{m c} \Omega t} \mathbf{v} \right)' = \frac{q}{m} \mathbf{E}.\end{aligned} \hspace{\stretch{1}}(1.7)

Our matrix exponentials are perfectly well formed, but we will run into trouble attempting this. We can get as far as the integral above before running into trouble

\begin{aligned}\mathbf{v} - \mathbf{v}_0 = \frac{q}{m}e^{ \frac{q }{m c} \Omega t}\left( \int_{\tau = 0}^te^{ -\frac{q }{m c} \Omega \tau}\right) \mathbf{E}.\end{aligned} \hspace{\stretch{1}}(1.8)

Only when \text{Det}{\Omega} \ne 0 do we have

\begin{aligned}\int e^{ -\frac{q }{m c} \Omega \tau} =- \frac{m c}{q} \Omega^{-1} e^{ -\frac{q }{m c} \Omega \tau},\end{aligned} \hspace{\stretch{1}}(1.9)

but in our case this determinant is zero, due to the antisymmetry that is built into our magnetic field tensor: for an antisymmetric matrix of odd dimension \text{Det} \Omega = \text{Det} \Omega^\text{T} = \text{Det} (-\Omega) = -\text{Det} \Omega, so \Omega is necessarily singular. It appears that we need a different strategy.

Second attempt.

It’s natural to attempt to pull out our spectral theorem toolbox. We find three independent eigenvalues for our matrix \Omega (one of which is naturally zero due to the singular nature of the matrix).

These eigenvalues are

\begin{aligned}\lambda_1 &= 0 \\ \lambda_2 &= i{\left\lvert{\mathbf{B}}\right\rvert} \\ \lambda_3 &= -i{\left\lvert{\mathbf{B}}\right\rvert}.\end{aligned} \hspace{\stretch{1}}(1.10)
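
These are easy to confirm numerically. A sketch assuming numpy, with an arbitrary sample field \mathbf{B} = (1, 2, 3):

import numpy as np
B1, B2, B3 = 1.0, 2.0, 3.0
Omega = np.array([[0, B3, -B2],
                  [-B3, 0, B1],
                  [B2, -B1, 0]])
print(np.linalg.det(Omega))                  # 0: odd dimensional antisymmetric matrices are singular
w = np.linalg.eigvals(Omega)
print(sorted(w, key=lambda z: z.imag))       # approximately -i|B|, 0, +i|B|, with |B| = sqrt(14)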

The corresponding orthonormal eigenvectors are found to be

\begin{aligned}\mathbf{u}_1 &= \frac{\mathbf{B}}{{\left\lvert{\mathbf{B}}\right\rvert}} \\ \mathbf{u}_2 &=\frac{1}{{{\left\lvert{\mathbf{B}}\right\rvert} \sqrt{ 2(B_1^2 + B_3^2) } }}\begin{bmatrix}i B_1 B_2 - B_3 {\left\lvert{\mathbf{B}}\right\rvert} \\ -i(B_1^2 + B_3^2) \\ i B_2 B_3 + B_1 {\left\lvert{\mathbf{B}}\right\rvert} \\ \end{bmatrix} \\ \mathbf{u}_3 &=\frac{1}{{{\left\lvert{\mathbf{B}}\right\rvert} \sqrt{ 2(B_1^2 + B_3^2) } }}\begin{bmatrix}-i B_1 B_2 - B_3 {\left\lvert{\mathbf{B}}\right\rvert} \\ i(B_1^2 + B_3^2) \\ -i B_2 B_3 + B_1 {\left\lvert{\mathbf{B}}\right\rvert} \\ \end{bmatrix}\end{aligned} \hspace{\stretch{1}}(1.13)

The last pair of eigenvectors are computed with the assumption that not both of B_1 and B_3 are zero. This allows for the spectral decomposition

\begin{aligned}U &=\begin{bmatrix}\mathbf{u}_1 & \mathbf{u}_2 & \mathbf{u}_3\end{bmatrix} \\ D &= \begin{bmatrix}0 & 0 & 0 \\ 0 & i {\left\lvert{\mathbf{B}}\right\rvert} & 0 \\ 0 & 0 & -i {\left\lvert{\mathbf{B}}\right\rvert}\end{bmatrix} \\ \Omega &= U D U^{*}\end{aligned} \hspace{\stretch{1}}(1.16)

We can use this to decouple our equation

\begin{aligned}\mathbf{v}' = \frac{q}{m} \mathbf{E} + \frac{q}{ m c } U D U^{*} \mathbf{v}\end{aligned} \hspace{\stretch{1}}(1.19)

forming instead

\begin{aligned}\mathbf{w} &= U^{*} \mathbf{v} \\ \mathbf{F} &= U^{*} \mathbf{E} \\ \mathbf{w}' &= \frac{q}{m} \mathbf{F} + \frac{q}{ m c } D \mathbf{w}.\end{aligned} \hspace{\stretch{1}}(1.20)

Written out explicitly, this is a set of three independent equations

\begin{aligned}w_1' &= \frac{q}{m} F_1 \\ w_2' &= \frac{q}{m} F_2 + \frac{q i {\left\lvert{\mathbf{B}}\right\rvert}}{ m c } w_2 \\ w_3' &= \frac{q}{m} F_3 - \frac{q i {\left\lvert{\mathbf{B}}\right\rvert}}{ m c } w_3\end{aligned} \hspace{\stretch{1}}(1.23)

Utilizing 1.6 our solution is

\begin{aligned}w_1 - w_1(0) &= \frac{q}{m} F_1 t \\ w_2 - w_2(0) &= - \frac{i c F_2 }{{\left\lvert{\mathbf{B}}\right\rvert}} \left( e^{ \frac{q i {\left\lvert{\mathbf{B}}\right\rvert} t}{ m c } } - 1 \right) \\ w_3 - w_3(0) &= \frac{i c F_3 }{{\left\lvert{\mathbf{B}}\right\rvert}} \left( e^{ -\frac{q i {\left\lvert{\mathbf{B}}\right\rvert} t}{ m c } } - 1 \right)\end{aligned} \hspace{\stretch{1}}(1.26)

Reinserting matrix form we have

\begin{aligned}\mathbf{w} - \mathbf{w}(0) =\begin{bmatrix}\frac{q}{m} \mathbf{e}_1^\text{T} U^{*} \mathbf{E} t \\ \frac{2 c \mathbf{e}_2^\text{T} U^{*} \mathbf{E} }{{\left\lvert{\mathbf{B}}\right\rvert}}e^{ \frac{q i {\left\lvert{\mathbf{B}}\right\rvert} t}{ 2 m c } } \sin \left( \frac{q {\left\lvert{\mathbf{B}}\right\rvert} t}{ 2 m c } \right) \\ \frac{2 c \mathbf{e}_3^\text{T} U^{*} \mathbf{E} }{{\left\lvert{\mathbf{B}}\right\rvert}}e^{ -\frac{q i {\left\lvert{\mathbf{B}}\right\rvert} t}{ 2 m c } } \sin \left( \frac{q {\left\lvert{\mathbf{B}}\right\rvert} t}{ 2 m c } \right) \\ \end{bmatrix}\end{aligned} \hspace{\stretch{1}}(1.29)

with

\begin{aligned}f_1 &= \frac{q}{m} t \\ f_2 &= \frac{2 c }{{\left\lvert{\mathbf{B}}\right\rvert}} e^{ \frac{q i {\left\lvert{\mathbf{B}}\right\rvert} t}{ 2 m c } } \sin \left( \frac{q {\left\lvert{\mathbf{B}}\right\rvert} t}{ 2 m c } \right) \\ f_3 &= \frac{2 c }{{\left\lvert{\mathbf{B}}\right\rvert}} e^{ -\frac{q i {\left\lvert{\mathbf{B}}\right\rvert} t}{ 2 m c } } \sin \left( \frac{q {\left\lvert{\mathbf{B}}\right\rvert} t}{ 2 m c } \right)\end{aligned} \hspace{\stretch{1}}(1.30)

We have

\begin{aligned}\mathbf{v} - \mathbf{v}(0) = U\begin{bmatrix}f_1 \mathbf{e}_1^\text{T} U^{*} \mathbf{E} \\ f_2 \mathbf{e}_2^\text{T} U^{*} \mathbf{E} \\ f_3 \mathbf{e}_3^\text{T} U^{*} \mathbf{E} \\ \end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(1.33)

Observe that the dot products embedded here can be nicely expressed in terms of the eigenvectors since

\begin{aligned}U^{*} \mathbf{E}= \begin{bmatrix}\mathbf{u}_1^{*} \cdot \mathbf{E} \\ \mathbf{u}_2^{*} \cdot \mathbf{E} \\ \mathbf{u}_3^{*} \cdot \mathbf{E}\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(1.34)

Our solution is thus a weighted sum of projections of the electric field vector \mathbf{E} onto the eigenvectors formed strictly from the magnetic field tensor

\begin{aligned}\mathbf{v} - \mathbf{v}(0) = \sum f_i \mathbf{u}_i ( \mathbf{u}_i^{*} \cdot \mathbf{E} ).\end{aligned} \hspace{\stretch{1}}(1.35)
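
Before interpreting this, the solution can be spot checked against direct numerical integration of 1.2. A sketch assuming numpy and scipy are available, with arbitrary sample fields and q = m = c = 1:

import numpy as np
from scipy.integrate import solve_ivp
q = m = c = 1.0
B = np.array([0.3, -0.4, 1.2])               # arbitrary sample fields
E = np.array([1.0, 0.5, -0.2])
Omega = np.array([[0, B[2], -B[1]],
                  [-B[2], 0, B[0]],
                  [B[1], -B[0], 0]])
tf = 5.0
num = solve_ivp(lambda t, v: q/m*E + q/(m*c)*(Omega @ v),
                (0, tf), np.zeros(3), rtol=1e-10, atol=1e-12).y[:, -1]
lam, U = np.linalg.eig(Omega)                # Omega is normal, so U is unitary here
F = U.conj().T @ E                           # the transformed field of 1.20
w = np.empty(3, dtype=complex)
for i in range(3):
    b = q*lam[i]/(m*c)
    w[i] = q*F[i]/m*tf if abs(b) < 1e-12 else q*F[i]/(m*b)*(np.exp(b*tf) - 1)
print(np.allclose(num, (U @ w).real))        # True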

Recalling that \mathbf{u}_1 = \mathbf{B}/{\left\lvert{\mathbf{B}}\right\rvert}, the unit vector that lies in the direction of the magnetic field, we have

\begin{aligned}\mathbf{v} - \mathbf{v}(0) = \frac{q t}{m} \hat{\mathbf{B}} (\hat{\mathbf{B}} \cdot \mathbf{E})+ \sum_{i=2}^3 f_i \mathbf{u}_i ( \mathbf{u}_i^{*} \cdot \mathbf{E} ).\end{aligned} \hspace{\stretch{1}}(1.36)

Also observe that this is a manifestly real valued solution, since the remaining eigenvectors are conjugate pairs \mathbf{u}_2 = \mathbf{u}_3^{*}, as are the differential solutions f_2 = f_3^{*}. This leaves us with

\begin{aligned}\mathbf{v} - \mathbf{v}(0) = \frac{q t}{m} \hat{\mathbf{B}} (\hat{\mathbf{B}} \cdot \mathbf{E})+ \frac{4 c }{{\left\lvert{\mathbf{B}}\right\rvert}} \sin \left( \frac{q {\left\lvert{\mathbf{B}}\right\rvert} t}{ 2 m c } \right) \text{Real} \left(e^{ \frac{q i {\left\lvert{\mathbf{B}}\right\rvert} t}{ 2 m c } } \mathbf{u}_2 ( \mathbf{u}_2^{*} \cdot \mathbf{E} )\right).\end{aligned} \hspace{\stretch{1}}(1.37)

It is natural to express \mathbf{u}_2 in terms of the direction cosines b_i of the magnetic field vector \mathbf{B} = {\left\lvert{\mathbf{B}}\right\rvert}(b_1, b_2, b_3)

\begin{aligned}\mathbf{u}_2 =\frac{1}{{\sqrt{ 2(b_1^2 + b_3^2) }}}\begin{bmatrix}i b_1 b_2 - b_3 \\ -i(b_1^2 + b_3^2) \\ i b_2 b_3 + b_1 \\ \end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(1.38)

Is there a way to express this beastie in a coordinate free fashion? I experimented with this a bit looking for something that would provide some additional geometrical meaning but did not find it. Further play with that is probably something for another day.

What we do see is that our velocity has two main components, one of which grows linearly in time in proportion to the collinearity of the magnetic and electric fields. The other component is oscillatory. With a better geometrical description of that eigenvector we could perhaps understand the mechanics a bit better.

References

[1] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press, Cambridge, UK, 1st edition, 2003.


PHY450H1S (relativistic electrodynamics) Problem Set 3.

Posted by peeterjoot on March 2, 2011

[Click here for a PDF of this post with nicer formatting]

Disclaimer.

This problem set is as yet ungraded (although only the second question will be graded).

Problem 1. Fun with \epsilon_{\alpha\beta\gamma}, \epsilon^{ijkl}, F_{ij}, and the duality of Maxwell’s equations in vacuum.

1. Statement. rank 3 spatial antisymmetric tensor identities.

Prove that

\begin{aligned}\epsilon_{\alpha \beta \gamma}\epsilon_{\mu \nu \gamma}=\delta_{\alpha\mu} \delta_{\beta\nu}-\delta_{\alpha\nu} \delta_{\beta\mu}\end{aligned} \hspace{\stretch{1}}(2.1)

and use it to find the familiar relation for

\begin{aligned}(\mathbf{A} \times \mathbf{B}) \cdot (\mathbf{C} \times \mathbf{D})\end{aligned} \hspace{\stretch{1}}(2.2)

Also show that

\begin{aligned}\epsilon_{\alpha \beta \gamma}\epsilon_{\mu \beta \gamma}=2 \delta_{\alpha\mu}.\end{aligned} \hspace{\stretch{1}}(2.3)

(Einstein summation is implied throughout this problem.)

1. Solution

We can explicitly expand the (implied) sum over indexes \gamma. This is

\begin{aligned}\epsilon_{\alpha \beta \gamma}\epsilon_{\mu \nu \gamma}=\epsilon_{\alpha \beta 1} \epsilon_{\mu \nu 1}+\epsilon_{\alpha \beta 2} \epsilon_{\mu \nu 2}+\epsilon_{\alpha \beta 3} \epsilon_{\mu \nu 3}\end{aligned} \hspace{\stretch{1}}(2.4)

For any \alpha \ne \beta only one term is non-zero. For example with \alpha,\beta = 2,3, we have just a contribution from the \gamma = 1 part of the sum

\begin{aligned}\epsilon_{2 3 1} \epsilon_{\mu \nu 1}.\end{aligned} \hspace{\stretch{1}}(2.5)

The value of this for (\mu,\nu) = (\alpha,\beta) is

\begin{aligned}(\epsilon_{2 3 1})^2\end{aligned} \hspace{\stretch{1}}(2.6)

whereas for (\mu,\nu) = (\beta,\alpha) we have

\begin{aligned}-(\epsilon_{2 3 1})^2\end{aligned} \hspace{\stretch{1}}(2.7)

Our sum has value one when (\alpha, \beta) matches (\mu, \nu), and value minus one when (\mu, \nu) = (\beta, \alpha). We can summarize this by saying that when \alpha \ne \beta we have

\begin{aligned}\boxed{\epsilon_{\alpha \beta \gamma}\epsilon_{\mu \nu \gamma}=\delta_{\alpha\mu} \delta_{\beta\nu}-\delta_{\alpha\nu} \delta_{\beta\mu}.}\end{aligned} \hspace{\stretch{1}}(2.8)

However, observe that when \alpha = \beta the RHS is

\begin{aligned}\delta_{\alpha\mu} \delta_{\alpha\nu}-\delta_{\alpha\nu} \delta_{\alpha\mu} = 0,\end{aligned} \hspace{\stretch{1}}(2.9)

as desired, so this form works in general without any \alpha \ne \beta qualifier, completing this part of the problem.

\begin{aligned}(\mathbf{A} \times \mathbf{B}) \cdot (\mathbf{C} \times \mathbf{D})&=(\epsilon_{\alpha \beta \gamma} \mathbf{e}^\alpha A^\beta B^\gamma ) \cdot(\epsilon_{\mu \nu \sigma} \mathbf{e}^\mu C^\nu D^\sigma ) \\ &=\epsilon_{\alpha \beta \gamma} A^\beta B^\gamma\epsilon_{\alpha \nu \sigma} C^\nu D^\sigma \\ &=(\delta_{\beta \nu} \delta_{\gamma\sigma}-\delta_{\beta \sigma} \delta_{\gamma\nu} )A^\beta B^\gamma C^\nu D^\sigma \\ &=A^\nu B^\sigma C^\nu D^\sigma-A^\sigma B^\nu C^\nu D^\sigma.\end{aligned}

This gives us

\begin{aligned}\boxed{(\mathbf{A} \times \mathbf{B}) \cdot (\mathbf{C} \times \mathbf{D})=(\mathbf{A} \cdot \mathbf{C})(\mathbf{B} \cdot \mathbf{D})-(\mathbf{A} \cdot \mathbf{D})(\mathbf{B} \cdot \mathbf{C}).}\end{aligned} \hspace{\stretch{1}}(2.10)

We have one more identity to deal with.

\begin{aligned}\epsilon_{\alpha \beta \gamma}\epsilon_{\mu \beta \gamma}\end{aligned} \hspace{\stretch{1}}(2.11)

We can expand out this (implied) sum slow and dumb as well

\begin{aligned}\epsilon_{\alpha \beta \gamma}\epsilon_{\mu \beta \gamma}&=\epsilon_{\alpha 1 2} \epsilon_{\mu 1 2}+\epsilon_{\alpha 2 1} \epsilon_{\mu 2 1} \\ &+\epsilon_{\alpha 1 3} \epsilon_{\mu 1 3}+\epsilon_{\alpha 3 1} \epsilon_{\mu 3 1} \\ &+\epsilon_{\alpha 2 3} \epsilon_{\mu 2 3}+\epsilon_{\alpha 3 2} \epsilon_{\mu 3 2} \\ &=2 \epsilon_{\alpha 1 2} \epsilon_{\mu 1 2}+ 2 \epsilon_{\alpha 1 3} \epsilon_{\mu 1 3}+ 2 \epsilon_{\alpha 2 3} \epsilon_{\mu 2 3}\end{aligned}

Now, observe that for any \alpha \in (1,2,3) only one term of this sum is picked up. For example, with no loss of generality, pick \alpha = 1. We are left with only

\begin{aligned}2 \epsilon_{1 2 3} \epsilon_{\mu 2 3}\end{aligned} \hspace{\stretch{1}}(2.12)

This has the value

\begin{aligned}2 (\epsilon_{1 2 3})^2 = 2\end{aligned} \hspace{\stretch{1}}(2.13)

when \mu = \alpha and is zero otherwise. We can therefore summarize the evaluation of this sum as

\begin{aligned}\boxed{\epsilon_{\alpha \beta \gamma}\epsilon_{\mu \beta \gamma}=  2\delta_{\alpha\mu},}\end{aligned} \hspace{\stretch{1}}(2.14)
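
Both of the boxed identities are easy to verify mechanically. A sketch assuming numpy, building \epsilon explicitly and contracting with einsum:

import numpy as np
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1, -1       # cyclic permutations +1, anticyclic -1
d = np.eye(3)
lhs = np.einsum('abg,mng->abmn', eps, eps)
rhs = np.einsum('am,bn->abmn', d, d) - np.einsum('an,bm->abmn', d, d)
print(np.allclose(lhs, rhs))                 # 2.8 holds
print(np.allclose(np.einsum('abg,mbg->am', eps, eps), 2*d))   # 2.14 holds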

completing this problem.

2. Statement. Determinant of three by three matrix.

Prove that for any 3 \times 3 matrix {\left\lVert{A_{\alpha\beta}}\right\rVert}: \epsilon_{\mu\nu\lambda} A_{\alpha \mu} A_{\beta\nu} A_{\gamma\lambda} = \epsilon_{\alpha \beta \gamma} \text{Det} A and that \epsilon_{\alpha\beta\gamma} \epsilon_{\mu\nu\lambda} A_{\alpha \mu} A_{\beta\nu} A_{\gamma\lambda} = 6 \text{Det} A.

2. Solution

In class Simon showed us how the first identity can be arrived at using the triple product \mathbf{a} \cdot (\mathbf{b} \times \mathbf{c}) = \text{Det}(\mathbf{a} \mathbf{b} \mathbf{c}). It occurred to me later that I’d seen the identity to be proven in the context of Geometric Algebra, but hadn’t recognized it in this tensor form. Basically, a wedge product can be expanded in sums of determinants, and when the dimension of the space matches the number of vectors in the wedge product, we have a pseudoscalar times the determinant of the components.

For example, in \mathbb{R}^{2}, let’s take the wedge product of a pair of vectors. As preparation for the relativistic \mathbb{R}^{4} case, we won’t require an orthonormal basis, but will express the vectors in terms of a reciprocal frame and the associated components

\begin{aligned}a = a^i e_i = a_j e^j\end{aligned} \hspace{\stretch{1}}(2.15)

where

\begin{aligned}e^i \cdot e_j = {\delta^i}_j.\end{aligned} \hspace{\stretch{1}}(2.16)

When we get to the relativistic case, we can pick (but don’t have to) the standard basis

\begin{aligned}e_0 &= (1, 0, 0, 0) \\ e_1 &= (0, 1, 0, 0) \\ e_2 &= (0, 0, 1, 0) \\ e_3 &= (0, 0, 0, 1),\end{aligned} \hspace{\stretch{1}}(2.17)

for which our reciprocal frame is implicitly defined by the metric

\begin{aligned}e^0 &= (1, 0, 0, 0) \\ e^1 &= (0, -1, 0, 0) \\ e^2 &= (0, 0, -1, 0) \\ e^3 &= (0, 0, 0, -1).\end{aligned} \hspace{\stretch{1}}(2.21)

Anyways. Back to the problem. Let’s examine the \mathbb{R}^{2} case. Our wedge product in coordinates is

\begin{aligned}a \wedge b=a^i b^j (e_i \wedge e_j)\end{aligned} \hspace{\stretch{1}}(2.25)

Since there are only two basis vectors we have

\begin{aligned}a \wedge b=(a^1 b^2 - a^2 b^1) e_1 \wedge e_2 = \text{Det} {\left\lVert{a^i b^j}\right\rVert} (e_1 \wedge e_2).\end{aligned} \hspace{\stretch{1}}(2.26)

Our wedge product is a product of the determinant of the vector coordinates, times the \mathbb{R}^{2} pseudoscalar e_1 \wedge e_2.

This doesn’t look quite like the \mathbb{R}^{3} relation that we want to prove, which had an antisymmetric tensor factor for the determinant. Observe that we get the determinant by picking off the e_1 \wedge e_2 component of the bivector result (the only component in this case), and we can do that by dotting with e^2 \wedge e^1. To get an antisymmetric tensor times the determinant, we have only to dot with a different pseudoscalar (one that differs by a possible sign due to permutation of the indexes). That is

\begin{aligned}(e^t \wedge e^s) \cdot (a \wedge b)&=a^i b^j (e^t \wedge e^s) \cdot (e_i \wedge e_j) \\ &=a^i b^j\left( {\delta^{s}}_i {\delta^{t}}_j-{\delta^{t}}_i {\delta^{s}}_j  \right) \\ &=a^i b^j{\delta^{[t}}_j {\delta^{s]}}_i \\ &=a^i b^j{\delta^{t}}_{[j} {\delta^{s}}_{i]} \\ &=a^{[i} b^{j]}{\delta^{t}}_{j} {\delta^{s}}_{i} \\ &=a^{[s} b^{t]}\end{aligned}

Now, if we write a^i = A^{1 i} and b^j = A^{2 j} we have

\begin{aligned}(e^t \wedge e^s) \cdot (a \wedge b)=A^{1 s} A^{2 t} -A^{1 t} A^{2 s}\end{aligned} \hspace{\stretch{1}}(2.27)

We can write this in two different ways, one of which is

\begin{aligned}A^{1 s} A^{2 t} -A^{1 t} A^{2 s} =\epsilon^{s t} \text{Det} {\left\lVert{A^{ij}}\right\rVert}\end{aligned} \hspace{\stretch{1}}(2.28)

and the other of which is by introducing free indexes for 1 and 2, and summing antisymmetrically over these. That is

\begin{aligned}A^{1 s} A^{2 t} -A^{1 t} A^{2 s}=A^{a s} A^{b t} \epsilon_{a b}\end{aligned} \hspace{\stretch{1}}(2.29)

So, we have

\begin{aligned}\boxed{A^{a s} A^{b t} \epsilon_{a b} =A^{1 i} A^{2 j} {\delta^{[t}}_j {\delta^{s]}}_i =\epsilon^{s t} \text{Det} {\left\lVert{A^{ij}}\right\rVert},}\end{aligned} \hspace{\stretch{1}}(2.30)

This result holds regardless of the metric for the space, and does not require an orthonormal basis. When the metric is Euclidean and we have an orthonormal basis, then all the indexes can be dropped.

The \mathbb{R}^{3} and \mathbb{R}^{4} cases follow in exactly the same way, we just need more vectors in the wedge products.

For the \mathbb{R}^{3} case we have

\begin{aligned}(e^u \wedge e^t \wedge e^s) \cdot ( a \wedge b \wedge c)&=a^i b^j c^k(e^u \wedge e^t \wedge e^s) \cdot (e_i \wedge e_j \wedge e_k) \\ &=a^i b^j c^k{\delta^{[u}}_k{\delta^{t}}_j{\delta^{s]}}_i \\ &=a^{[s} b^t c^{u]}\end{aligned}

Again, with a^i = A^{1 i} and b^j = A^{2 j}, and c^k = A^{3 k} we have

\begin{aligned}(e^u \wedge e^t \wedge e^s) \cdot ( a \wedge b \wedge c)=A^{1 i} A^{2 j} A^{3 k}{\delta^{[u}}_k{\delta^{t}}_j{\delta^{s]}}_i\end{aligned} \hspace{\stretch{1}}(2.31)

and we can choose to write this in either form, resulting in the identity

\begin{aligned}\boxed{\epsilon^{s t u} \text{Det} {\left\lVert{A^{ij}}\right\rVert}=A^{1 i} A^{2 j} A^{3 k}{\delta^{[u}}_k{\delta^{t}}_j{\delta^{s]}}_i=\epsilon_{a b c} A^{a s} A^{b t} A^{c u}.}\end{aligned} \hspace{\stretch{1}}(2.32)

The \mathbb{R}^{4} case follows exactly the same way, and we have

\begin{aligned}(e^v \wedge e^u \wedge e^t \wedge e^s) \cdot ( a \wedge b \wedge c \wedge d)&=a^i b^j c^k d^l(e^v \wedge e^u \wedge e^t \wedge e^s) \cdot (e_i \wedge e_j \wedge e_k \wedge e_l) \\ &=a^i b^j c^k d^l{\delta^{[v}}_l{\delta^{u}}_k{\delta^{t}}_j{\delta^{s]}}_i \\ &=a^{[s} b^t c^{u} d^{v]}.\end{aligned}

This time with a^i = A^{0 i} and b^j = A^{1 j}, and c^k = A^{2 k}, and d^l = A^{3 l} we have

\begin{aligned}\boxed{\epsilon^{s t u v} \text{Det} {\left\lVert{A^{ij}}\right\rVert}=A^{0 i} A^{1 j} A^{2 k} A^{3 l}{\delta^{[v}}_l{\delta^{u}}_k{\delta^{t}}_j{\delta^{s]}}_i=\epsilon_{a b c d} A^{a s} A^{b t} A^{c u} A^{d v}.}\end{aligned} \hspace{\stretch{1}}(2.33)

This one is almost the identity to be established later in problem 1.4. We have only to raise and lower some indexes to get that one. Note that in the Minkowski standard basis above, because s, t, u, v must be a permutation of 0,1,2,3 for a non-zero result, we must have

\begin{aligned}\epsilon^{s t u v} = (-1)^3 (+1) \epsilon_{s t u v}.\end{aligned} \hspace{\stretch{1}}(2.34)

So raising and lowering the identity above gives us

\begin{aligned}-\epsilon_{s t u v} \text{Det} {\left\lVert{A_{ij}}\right\rVert}=\epsilon^{a b c d} A_{a s} A_{b t} A_{c u} A_{d v}.\end{aligned} \hspace{\stretch{1}}(2.35)

No sign changes were required for the indexes a, b, c, d, since they are paired.

Until we did the raising and lowering operations here, there was no specific metric required, so our first result 2.33 is the more general one.

There’s one more part to this problem, doing the antisymmetric sums over the indexes s, t, \cdots. For the \mathbb{R}^{2} case we have

\begin{aligned}\epsilon_{s t} \epsilon_{a b} A^{a s} A^{b t}&=\epsilon_{s t} \epsilon^{s t} \text{Det} {\left\lVert{A^{ij}}\right\rVert} \\ &=\left( \epsilon_{1 2} \epsilon^{1 2} +\epsilon_{2 1} \epsilon^{2 1} \right)\text{Det} {\left\lVert{A^{ij}}\right\rVert} \\ &=\left( 1^2 + (-1)^2\right)\text{Det} {\left\lVert{A^{ij}}\right\rVert}\end{aligned}

We conclude that

\begin{aligned}\boxed{\epsilon_{s t} \epsilon_{a b} A^{a s} A^{b t} = 2! \text{Det} {\left\lVert{A^{ij}}\right\rVert}.}\end{aligned} \hspace{\stretch{1}}(2.36)

For the \mathbb{R}^{3} case we have the same operation

\begin{aligned}\epsilon_{s t u} \epsilon_{a b c} A^{a s} A^{b t} A^{c u}&=\epsilon_{s t u} \epsilon^{s t u} \text{Det} {\left\lVert{A^{ij}}\right\rVert} \\ &=\left( \epsilon_{1 2 3} \epsilon^{1 2 3} +\epsilon_{1 3 2} \epsilon^{1 3 2} + \cdots\right)\text{Det} {\left\lVert{A^{ij}}\right\rVert} \\ &=(\pm 1)^2 (3!)\text{Det} {\left\lVert{A^{ij}}\right\rVert}.\end{aligned}

So we conclude

\begin{aligned}\boxed{\epsilon_{s t u} \epsilon_{a b c} A^{a s} A^{b t} A^{c u}= 3! \text{Det} {\left\lVert{A^{ij}}\right\rVert}.}\end{aligned} \hspace{\stretch{1}}(2.37)
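
A numeric confirmation of this for a random matrix (a sketch assuming numpy, with \epsilon built as in the earlier sketch):

import numpy as np
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1, -1
val = np.einsum('stu,abc,as,bt,cu->', eps, eps, A, A, A)
print(val, 6*np.linalg.det(A))               # equal: 3! Det A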

It’s clear what the pattern is, and if we evaluate the sum of the antisymmetric tensor squares in \mathbb{R}^{4} we have

\begin{aligned}\epsilon_{s t u v} \epsilon_{s t u v}&=\epsilon_{0 1 2 3} \epsilon_{0 1 2 3}+\epsilon_{0 1 3 2} \epsilon_{0 1 3 2}+\epsilon_{0 2 1 3} \epsilon_{0 2 1 3}+ \cdots \\ &= (\pm 1)^2 (4!),\end{aligned}

So, for our SR case we have

\begin{aligned}\boxed{\epsilon_{s t u v} \epsilon_{a b c d} A^{a s} A^{b t} A^{c u} A^{d v}= 4! \text{Det} {\left\lVert{A^{ij}}\right\rVert}.}\end{aligned} \hspace{\stretch{1}}(2.38)

This was part of question 1.4, albeit in lower index form. Here since all indexes are matched, we have the same result without major change

\begin{aligned}\boxed{\epsilon^{s t u v} \epsilon^{a b c d} A_{a s} A_{b t} A_{c u} A_{d v}= 4! \text{Det} {\left\lVert{A_{ij}}\right\rVert}.}\end{aligned} \hspace{\stretch{1}}(2.39)

The main difference is that we are now taking the determinant of a lower index tensor.

3. Statement. Rotational invariance of 3D antisymmetric tensor

Use the previous results to show that \epsilon_{\mu\nu\lambda} is invariant under rotations.

3. Solution

We apply transformations to coordinates (and thus indexes) of the form

\begin{aligned}x_\mu \rightarrow O_{\mu\nu} x_\nu\end{aligned} \hspace{\stretch{1}}(2.40)

With our tensor transforming as its indexes, we have

\begin{aligned}\epsilon_{\mu\nu\lambda} \rightarrow \epsilon_{\alpha\beta\sigma} O_{\mu\alpha} O_{\nu\beta} O_{\lambda\sigma}.\end{aligned} \hspace{\stretch{1}}(2.41)

We’ve got 2.32, which, after dropping the index positioning (since we are in a Euclidean space), becomes

\begin{aligned}\epsilon_{\mu \nu \lambda} \text{Det} {\left\lVert{A_{ij}}\right\rVert} = \epsilon_{\alpha \beta \sigma} A_{\alpha \mu} A_{\beta \nu} A_{\sigma \lambda}.\end{aligned} \hspace{\stretch{1}}(2.42)

Let A_{i j} = O_{j i}, which gives us

\begin{aligned}\epsilon_{\mu\nu\lambda} \rightarrow \epsilon_{\mu\nu\lambda} \text{Det} A^\text{T}\end{aligned} \hspace{\stretch{1}}(2.43)

but since \text{Det} O = \text{Det} O^\text{T}, we have shown that \epsilon_{\mu\nu\lambda} is invariant under rotation.

4. Statement. Rotational invariance of 4D antisymmetric tensor

Use the previous results to show that \epsilon_{i j k l} is invariant under Lorentz transformations.

4. Solution

This follows the same way. We assume a transformation of coordinates of the following form

\begin{aligned}(x')^i &= {O^i}_j x^j \\ (x')_i &= {O_i}^j x_j,\end{aligned} \hspace{\stretch{1}}(2.44)

where the determinant of {O^i}_j is unity (sanity check of the sign: the identity transformation {O^i}_j = {\delta^i}_j has determinant +1).

Our antisymmetric tensor transforms as its coordinates individually

\begin{aligned}\epsilon_{i j k l} &\rightarrow \epsilon_{a b c d} {O_i}^a{O_j}^b{O_k}^c{O_l}^d \\ &= \epsilon^{a b c d} O_{i a}O_{j b}O_{k c}O_{l d} \\ \end{aligned}

Let P_{ij} = O_{ji}, and raise and lower all the indexes in 2.46 for

\begin{aligned}-\epsilon_{s t u v} \text{Det} {\left\lVert{P_{ij}}\right\rVert}=\epsilon^{a b c d} P_{a s} P_{b t} P_{c u} P_{d v}.\end{aligned} \hspace{\stretch{1}}(2.46)

We have

\begin{aligned}\epsilon_{i j k l} &= \epsilon^{a b c d} P_{a i}P_{b j}P_{c k}P_{d l} \\ &=-\epsilon_{i j k l} \text{Det} {\left\lVert{P_{ij}}\right\rVert} \\ &=-\epsilon_{i j k l} \text{Det} {\left\lVert{O_{ij}}\right\rVert} \\ &=-\epsilon_{i j k l} \text{Det} {\left\lVert{g_{im} {O^m}_j }\right\rVert} \\ &=-\epsilon_{i j k l} (-1)(1) \\ &=\epsilon_{i j k l}\end{aligned}

Since \epsilon_{i j k l} = -\epsilon^{i j k l} both are therefore invariant under Lorentz transformation.

5. Statement. Sum of contracting symmetric and antisymmetric rank 2 tensors

Show that A^{ij} B_{ij} = 0 if A is symmetric and B is antisymmetric.

5. Solution

We swap indexes in B, switch dummy indexes, then swap indexes in A

\begin{aligned}A^{i j} B_{i j} &= -A^{i j} B_{j i} \\ &= -A^{j i} B_{i j} \\ &= -A^{i j} B_{i j} \\ \end{aligned}

Our result is the negative of itself, so must be zero.
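
A two line numeric illustration of the same fact (a sketch assuming numpy):

import numpy as np
rng = np.random.default_rng(1)
M, N = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
A, B = M + M.T, N - N.T                      # symmetric and antisymmetric parts
print(np.einsum('ij,ij->', A, B))            # zero, to machine precision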

6. Statement. Characteristic equation for the electromagnetic strength tensor

Show that P(\lambda) = \text{Det} {\left\lVert{F_{i j} - \lambda g_{i j}}\right\rVert} is invariant under Lorentz transformations. Consider the polynomial P(\lambda), also called the characteristic polynomial of the matrix {\left\lVert{F_{i j}}\right\rVert}. Find the coefficients of the expansion of P(\lambda) in powers of \lambda in terms of the components of {\left\lVert{F_{i j}}\right\rVert}. Use the result to argue that \mathbf{E} \cdot \mathbf{B} and \mathbf{E}^2 - \mathbf{B}^2 are Lorentz invariant.

6. Solution

The invariance of the determinant

Let’s consider how any lower index rank 2 tensor transforms. Given a transformation of coordinates

\begin{aligned}(x^i)' &= {O^i}_j x^j \\ (x_i)' &= {O_i}^j x^j ,\end{aligned} \hspace{\stretch{1}}(2.47)

where \text{Det} {\left\lVert{ {O^i}_j }\right\rVert} = 1, and {O_i}^j = {O^m}_n g_{i m} g^{j n}. Let’s reflect briefly on why this determinant is unit valued. We have

\begin{aligned}(x^i)' (x_i)'= {O_i}^a x_a {O^i}_b x^b = x^b x_b,\end{aligned} \hspace{\stretch{1}}(2.49)

which implies that the transformation product is

\begin{aligned}{O_i}^a {O^i}_b = {\delta^a}_b,\end{aligned} \hspace{\stretch{1}}(2.50)

the identity matrix. The identity matrix has unit determinant, so we must have

\begin{aligned}1 = (\text{Det} \hat{G})^2 (\text{Det} {\left\lVert{ {O^i}_j }\right\rVert})^2.\end{aligned} \hspace{\stretch{1}}(2.51)

Since \text{Det} \hat{G} = -1 we have

\begin{aligned}\text{Det} {\left\lVert{ {O^i}_j }\right\rVert} = \pm 1,\end{aligned} \hspace{\stretch{1}}(2.52)

which is all that we can say about the determinant of this class of transformations by considering just invariance. If we restrict the transformations of coordinates to those of the same determinant sign as the identity matrix, we rule out reflections in time or space. This seems to be the essence of the SO(1,3) labeling.

Why dwell on this? Well, I wanted to be clear on the conventions I’d chosen, since parts of the course notes used \hat{O} = {\left\lVert{O^{i j}}\right\rVert}, and X' = \hat{O} X, and gave that matrix unit determinant. That O^{i j} looks like it is equivalent to my {O^i}_j, except that the one in the course notes is loose when it comes to lower and upper indexes since it gives (x')^i = O^{i j} x^j.

I’ll write

\begin{aligned}\hat{O} = {\left\lVert{{O^i}_j}\right\rVert},\end{aligned} \hspace{\stretch{1}}(2.53)

and require this (not {\left\lVert{O^{i j}}\right\rVert}) to be the matrix with unit determinant. Having cleared the index upper and lower confusion I had trying to reconcile the class notes with the rules for index manipulation, let’s now consider the Lorentz transformation of a lower index rank 2 tensor (not necessarily antisymmetric or symmetric)

We have, transforming in the same fashion as a lower index coordinate four vector (but twice, once for each index)

\begin{aligned}A_{i j} \rightarrow A_{k m} {O_i}^k{O_j}^m.\end{aligned} \hspace{\stretch{1}}(2.54)

The determinant of the transformation tensor {O_i}^j is

\begin{aligned}\text{Det} {\left\lVert{ {O_i}^j }\right\rVert} = \text{Det} {\left\lVert{ g^{i m} {O^m}_n g^{n j} }\right\rVert} = (\text{Det} \hat{G}) (1) (\text{Det} \hat{G} ) = (-1)^2 (1) = 1.\end{aligned} \hspace{\stretch{1}}(2.55)

We see that the determinant of a lower index rank 2 tensor is invariant under Lorentz transformation. This would include our characteristic polynomial P(\lambda).

Expanding the determinant.

Utilizing 2.39 we can now calculate the characteristic polynomial. This is

\begin{aligned}\text{Det} {\left\lVert{F_{ij} - \lambda g_{ij} }\right\rVert}&= \frac{1}{{4!}}\epsilon^{s t u v} \epsilon^{a b c d} (F_{ a s } - \lambda g_{a s}) (F_{ b t } - \lambda g_{b t}) (F_{ c u } - \lambda g_{c u}) (F_{ d v } - \lambda g_{d v}) \\ &=\frac{1}{{24}}\epsilon^{s t u v} \epsilon_{a b c d} ({F^a}_s - \lambda {g^a}_s) ({F^b}_t - \lambda {g^b}_t) ({F^c}_u - \lambda {g^c}_u) ({F^d}_v - \lambda {g^d}_v) \\ \end{aligned}

However, {g^a}_b = g_{b c} g^{a c}, or {\left\lVert{{g^a}_b}\right\rVert} = \hat{G}^2 = I. This means we have

\begin{aligned}{g^a}_b = {\delta^a}_b,\end{aligned} \hspace{\stretch{1}}(2.56)

and our determinant is reduced to

\begin{aligned}\begin{aligned}P(\lambda) &=\frac{1}{{24}}\epsilon^{s t u v} \epsilon_{a b c d} \Bigl({F^a}_s {F^b}_t - \lambda( {\delta^a}_s {F^b}_t + {\delta^b}_t {F^a}_s ) + \lambda^2 {\delta^a}_s {\delta^b}_t \Bigr) \\ &\times \qquad \qquad \Bigl({F^c}_u {F^d}_v - \lambda( {\delta^c}_u {F^d}_v + {\delta^d}_v {F^c}_u ) + \lambda^2 {\delta^c}_u {\delta^d}_v \Bigr) \end{aligned}\end{aligned} \hspace{\stretch{1}}(2.57)

If we expand this out we have our powers of \lambda coefficients are

\begin{aligned}\lambda^0 &:\frac{1}{{24}} \epsilon^{s t u v} \epsilon_{a b c d} {F^a}_s {F^b}_t {F^c}_u {F^d}_v \\ \lambda^1 &:\frac{1}{{24}} \epsilon^{s t u v} \epsilon_{a b c d} \Bigl(- ({\delta^c}_u {F^d}_v + {\delta^d}_v {F^c}_u ) {F^a}_s {F^b}_t - ({\delta^a}_s {F^b}_t + {\delta^b}_t {F^a}_s ) {F^c}_u {F^d}_v \Bigr) \\ \lambda^2 &:\frac{1}{{24}} \epsilon^{s t u v} \epsilon_{a b c d} \Bigl({\delta^c}_u {\delta^d}_v {F^a}_s {F^b}_t +( {\delta^a}_s {F^b}_t + {\delta^b}_t {F^a}_s ) ( {\delta^c}_u {F^d}_v + {\delta^d}_v {F^c}_u ) + {\delta^a}_s {\delta^b}_t  {F^c}_u {F^d}_v \Bigr) \\ \lambda^3 &:\frac{1}{{24}} \epsilon^{s t u v} \epsilon_{a b c d} \Bigl(- ( {\delta^a}_s {F^b}_t + {\delta^b}_t {F^a}_s ) {\delta^c}_u {\delta^d}_v - {\delta^a}_s {\delta^b}_t  ( {\delta^c}_u {F^d}_v + {\delta^d}_v {F^c}_u ) \Bigr) \\ \lambda^4 &:\frac{1}{{24}} \epsilon^{s t u v} \epsilon_{a b c d} \Bigl({\delta^a}_s {\delta^b}_t {\delta^c}_u {\delta^d}_v \Bigr) \\ \end{aligned}

By 2.39 the \lambda^0 coefficient is just \text{Det} {\left\lVert{F_{i j}}\right\rVert}.

The \lambda^3 terms can be seen to be zero. For example, the first one is

\begin{aligned}-\frac{1}{{24}} \epsilon^{s t u v} \epsilon_{a b c d} {\delta^a}_s {F^b}_t {\delta^c}_u {\delta^d}_v &=-\frac{1}{{24}} \epsilon^{s t u v} \epsilon_{s b u v} {F^b}_t \\ &=-\frac{1}{{12}} \delta^{t}_b {F^b}_t \\ &=-\frac{1}{{12}} {F^b}_b \\ &=-\frac{1}{{12}} F^{bu} g_{ub} \\ &= 0,\end{aligned}

where the final equality to zero comes from summing a symmetric and antisymmetric product.

Similarly the \lambda coefficients can be shown to be zero. Again the first as a sample is

\begin{aligned}-\frac{1}{{24}} \epsilon^{s t u v} \epsilon_{a b c d} {\delta^c}_u {F^d}_v {F^a}_s {F^b}_t &=-\frac{1}{{24}} \epsilon^{u s t v} \epsilon_{u a b d} {F^d}_v {F^a}_s {F^b}_t  \\ &=-\frac{1}{{24}} \delta^{[s}_a\delta^{t}_b\delta^{v]}_d{F^d}_v {F^a}_s {F^b}_t  \\ &=-\frac{1}{{24}} {F^a}_{[s}{F^b}_{t}{F^d}_{v]} \\ \end{aligned}

Disregarding the -1/24 factor, let’s just expand this antisymmetric sum

\begin{aligned}{F^a}_{[a}{F^b}_{b}{F^d}_{d]}&={F^a}_{a}{F^b}_{b}{F^d}_{d}+{F^a}_{d}{F^b}_{a}{F^d}_{b}+{F^a}_{b}{F^b}_{d}{F^d}_{a}-{F^a}_{a}{F^b}_{d}{F^d}_{b}-{F^a}_{d}{F^b}_{b}{F^d}_{a}-{F^a}_{b}{F^b}_{a}{F^d}_{d} \\ &={F^a}_{d}{F^b}_{a}{F^d}_{b}+{F^a}_{b}{F^b}_{d}{F^d}_{a} \\ \end{aligned}

Of the two terms above that were retained, they are the only ones without a zero {F^i}_i factor. Consider the first part of this remaining part of the sum. Employing the metric tensor, to raise indexes so that the antisymmetry of F^{ij} can be utilized, and then finally relabeling all the dummy indexes we have

\begin{aligned}{F^a}_{d}{F^b}_{a}{F^d}_{b}&=F^{a u}F^{b v}F^{d w}g_{d u}g_{a v}g_{b w} \\ &=(-1)^3F^{u a}F^{v b}F^{w d}g_{d u}g_{a v}g_{b w} \\ &=-(F^{u a}g_{a v})(F^{v b}g_{b w} )(F^{w d}g_{d u})\\ &=-{F^u}_v{F^v}_w{F^w}_u\\ &=-{F^a}_b{F^b}_d{F^d}_a\\ \end{aligned}

This is just the negative of the second term in the sum, leaving us with zero.

Finally, we have for the \lambda^2 coefficient (\times 24)

\begin{aligned}&\epsilon^{s t u v} \epsilon_{a b c d} \Bigl({\delta^c}_u {\delta^d}_v {F^a}_s {F^b}_t +{\delta^a}_s {F^b}_t {\delta^c}_u {F^d}_v +{\delta^b}_t {F^a}_s {\delta^d}_v {F^c}_u  \\ &\qquad +{\delta^b}_t {F^a}_s {\delta^c}_u {F^d}_v +{\delta^a}_s {F^b}_t {\delta^d}_v {F^c}_u + {\delta^a}_s {\delta^b}_t  {F^c}_u {F^d}_v \Bigr) \\ &=\epsilon^{s t u v} \epsilon_{a b u v}   {F^a}_s {F^b}_t +\epsilon^{s t u v} \epsilon_{s b u d}  {F^b}_t  {F^d}_v +\epsilon^{s t u v} \epsilon_{a t c v}  {F^a}_s  {F^c}_u  \\ &\qquad +\epsilon^{s t u v} \epsilon_{a t u d}  {F^a}_s  {F^d}_v +\epsilon^{s t u v} \epsilon_{s b c v}  {F^b}_t  {F^c}_u + \epsilon^{s t u v} \epsilon_{s t c d}    {F^c}_u {F^d}_v \\ &=\epsilon^{s t u v} \epsilon_{a b u v}   {F^a}_s {F^b}_t +\epsilon^{t v s u } \epsilon_{b d s u}  {F^b}_t  {F^d}_v +\epsilon^{s u t v} \epsilon_{a c t v}  {F^a}_s  {F^c}_u  \\ &\qquad +\epsilon^{s v t u} \epsilon_{a d t u}  {F^a}_s  {F^d}_v +\epsilon^{t u s v} \epsilon_{b c s v}  {F^b}_t  {F^c}_u + \epsilon^{u v s t} \epsilon_{c d s t}    {F^c}_u {F^d}_v \\ &=6\epsilon^{s t u v} \epsilon_{a b u v} {F^a}_s {F^b}_t  \\ &=6 (2){\delta^{[s}}_a{\delta^{t]}}_b{F^a}_s {F^b}_t  \\ &=12{F^a}_{[a} {F^b}_{b]}  \\ &=12( {F^a}_{a} {F^b}_{b} - {F^a}_{b} {F^b}_{a} ) \\ &=-12 {F^a}_{b} {F^b}_{a} \\ &=-12 F^{a b} F_{b a} \\ &=12 F^{a b} F_{a b}\end{aligned}

Therefore, our characteristic polynomial is

\begin{aligned}\boxed{P(\lambda) = \text{Det} {\left\lVert{F_{i j}}\right\rVert} + \frac{\lambda^2}{2} F^{a b} F_{a b} + \lambda^4.}\end{aligned} \hspace{\stretch{1}}(2.58)

Observe that in matrix form our strength tensors are

\begin{aligned}{\left\lVert{ F^{ij} }\right\rVert} &= \begin{bmatrix}0 & -E_x & -E_y & -E_z \\ E_x & 0 & -B_z & B_y \\ E_y & B_z & 0 & -B_x \\ E_z & -B_y & B_x & 0\end{bmatrix} \\ {\left\lVert{ F_{ij} }\right\rVert} &= \begin{bmatrix}0 & E_x & E_y & E_z \\ -E_x & 0 & -B_z & B_y \\ -E_y & B_z & 0 & -B_x \\ -E_z & -B_y & B_x & 0\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(2.59)

From these we can compute F^{a b} F_{a b} easily by inspection

\begin{aligned}F^{a b} F_{a b} = 2 (\mathbf{B}^2 - \mathbf{E}^2).\end{aligned} \hspace{\stretch{1}}(2.61)

Computing the determinant is not so easy. The dumb and simple way of expanding by cofactors takes two pages, and yields eventually

\begin{aligned}\text{Det} {\left\lVert{ F^{i j} }\right\rVert} = (\mathbf{E} \cdot \mathbf{B})^2.\end{aligned} \hspace{\stretch{1}}(2.62)
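
Both invariants are easy to confirm numerically from the matrix forms 2.59. A sketch assuming numpy, with arbitrary sample field values:

import numpy as np
E = np.array([0.3, -1.1, 0.7])               # arbitrary sample fields
B = np.array([0.9, 0.2, -0.5])
Ex, Ey, Ez = E
Bx, By, Bz = B
F_up = np.array([[0, -Ex, -Ey, -Ez],
                 [Ex, 0, -Bz, By],
                 [Ey, Bz, 0, -Bx],
                 [Ez, -By, Bx, 0]])          # F^{ij}
g = np.diag([1.0, -1.0, -1.0, -1.0])
F_lo = g @ F_up @ g                          # F_{ij}
print(np.sum(F_up*F_lo), 2*(B @ B - E @ E))  # 2.61
print(np.linalg.det(F_up), (E @ B)**2)       # 2.62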

That supplies us with a relation for the characteristic polynomial in \mathbf{E} and \mathbf{B}

\begin{aligned}\boxed{P(\lambda) = (\mathbf{E} \cdot \mathbf{B})^2 + \lambda^2 (\mathbf{B}^2 - \mathbf{E}^2) + \lambda^4.}\end{aligned} \hspace{\stretch{1}}(2.63)

Observe that we found this for the special case where \mathbf{E} and \mathbf{B} were perpendicular in homework 2. Note that when we have that perpendicularity, we can solve for the eigenvalues by inspection

\begin{aligned}\lambda \in \{ 0, 0, \pm \sqrt{ \mathbf{E}^2 - \mathbf{B}^2 } \},\end{aligned} \hspace{\stretch{1}}(2.64)

and were able to diagonalize the matrix {F^{i}}_j to solve the Lorentz force equation in parametric form. When {\left\lvert{\mathbf{E}}\right\rvert} > {\left\lvert{\mathbf{B}}\right\rvert} we had real eigenvalues, with an orthogonal diagonalization when \mathbf{B} = 0. For {\left\lvert{\mathbf{B}}\right\rvert} > {\left\lvert{\mathbf{E}}\right\rvert}, we had two purely imaginary eigenvalues, and when \mathbf{E} = 0 this was a Hermitian diagonalization. For the general case, when neither \mathbf{E} nor \mathbf{B} was zero, things didn’t have the same nice closed form solution.

In general our eigenvalues are

\begin{aligned}\lambda = \pm \frac{1}{{\sqrt{2}}} \sqrt{ \mathbf{E}^2 - \mathbf{B}^2 \pm \sqrt{ (\mathbf{E}^2 - \mathbf{B}^2)^2 - 4 (\mathbf{E} \cdot \mathbf{B})^2 }}.\end{aligned} \hspace{\stretch{1}}(2.65)

For the purposes of this problem we really only wish to show that \mathbf{E} \cdot \mathbf{B} and \mathbf{E}^2 - \mathbf{B}^2 are Lorentz invariants. When \lambda = 0 we have P(\lambda) = (\mathbf{E} \cdot \mathbf{B})^2, a Lorentz invariant. This must mean that \mathbf{E} \cdot \mathbf{B} is itself a Lorentz invariant. Since that is invariant, and we require P(\lambda) to be invariant for any other possible values of \lambda, the difference \mathbf{E}^2 - \mathbf{B}^2 must also be Lorentz invariant.

7. Statement. Show that the pseudoscalar invariant has only boundary effects.

Use integration by parts to show that \int d^4 x \epsilon^{i j k l} F_{ i j } F_{ k l } only depends on the values of A^i(x) at the “boundary” of spacetime (e.g. the “surface” depicted on page 105 of the notes) and hence does not affect the equations of motion for the electromagnetic field.

7. Solution

This proceeds in a fairly straightforward fashion

\begin{aligned}\int d^4 x \epsilon^{i j k l} F_{ i j } F_{ k l }&=\int d^4 x \epsilon^{i j k l} (\partial_i A_j - \partial_j A_i) F_{ k l } \\ &=\int d^4 x \left( \epsilon^{i j k l} (\partial_i A_j) F_{ k l } -\epsilon^{j i k l} (\partial_i A_j) F_{ k l } \right) \\ &=2 \int d^4 x \epsilon^{i j k l} (\partial_i A_j) F_{ k l } \\ &=2 \int d^4 x \epsilon^{i j k l} \left( \frac{\partial {}}{\partial {x^i}}\left(A_j F_{ k l }\right)-A_j \frac{\partial { F_{ k l } }}{\partial {x^i}}\right)\end{aligned}

Now, observe that this second term is zero, since the symmetric pair of derivatives \partial_i \partial_k is contracted against indexes antisymmetric in i and k (this is the Bianchi identity at work)

\begin{aligned}\epsilon^{i j k l} \frac{\partial { F_{ k l } }}{\partial {x^i}}=\epsilon^{i j k l} \partial_i \left( \partial_k A_l - \partial_l A_k \right)=2 \epsilon^{i j k l} \partial_i \partial_k A_l= 0.\end{aligned} \hspace{\stretch{1}}(2.66)

Now we have a set of perfect differentials, and can integrate

\begin{aligned}\int d^4 x \epsilon^{i j k l} F_{ i j } F_{ k l }&= 2 \int d^4 x \epsilon^{i j k l} \frac{\partial {}}{\partial {x^i}}(A_j F_{ k l })\\ &= 2 \int dx^j dx^k dx^l\epsilon^{i j k l} {\left.{{(A_j F_{ k l })}}\right\vert}_{{\Delta x^i}}\\ \end{aligned}

We are left with only contributions to the integral from the boundary terms: the three-surfaces bounding the spacetime four-volume of the original integration. Since a pure boundary term produces no variation in the interior, it does not affect the equations of motion for the electromagnetic field.

8. Statement. Electromagnetic duality transformations.

Show that the Maxwell equations in vacuum are invariant under the transformation: F_{i j} \rightarrow \tilde{F}_{i j}, where \tilde{F}_{i j} = \frac{1}{{2}} \epsilon_{i j k l} F^{k l} is the dual electromagnetic stress tensor. Replacing F with \tilde{F} is known as “electric-magnetic duality”. Explain this name by considering the transformation in terms of \mathbf{E} and \mathbf{B}. Are the Maxwell equations with sources invariant under electric-magnetic duality transformations?

8. Solution

Let’s first consider the explanation of the name. First recall what the expansions are of F_{i j} and F^{i j} in terms of \mathbf{E} and \mathbf{B}. These are

\begin{aligned}F_{0 \alpha} &= \partial_0 A_\alpha - \partial_\alpha A_0 \\ &= -\frac{1}{{c}} \frac{\partial {A^\alpha}}{\partial {t}} - \frac{\partial {\phi}}{\partial {x^\alpha}} \\ &= E_\alpha\end{aligned}

with F^{0 \alpha} = -E^\alpha, and E^\alpha = E_\alpha.

The magnetic field components are

\begin{aligned}F_{\beta \alpha} &= \partial_\beta A_\alpha - \partial_\alpha A_\beta \\ &= -\partial_\beta A^\alpha + \partial_\alpha A^\beta \\ &= \epsilon_{\alpha \beta \sigma} B^\sigma\end{aligned}

with F^{\beta \alpha} = \epsilon^{\alpha \beta \sigma} B_\sigma and B_\sigma = B^\sigma.

Now let’s expand the dual tensors. These are

\begin{aligned}\tilde{F}_{0 \alpha} &=\frac{1}{{2}} \epsilon_{0 \alpha i j} F^{i j} \\ &=\frac{1}{{2}} \epsilon_{0 \alpha \beta \sigma} F^{\beta \sigma} \\ &=\frac{1}{{2}} \epsilon_{0 \alpha \beta \sigma} \epsilon^{\sigma \beta \mu} B_\mu \\ &=-\frac{1}{{2}} \epsilon_{0 \alpha \beta \sigma} \epsilon^{\mu \beta \sigma} B_\mu \\ &=-\frac{1}{{2}} (2!) {\delta_\alpha}^\mu B_\mu \\ &=- B_\alpha \\ \end{aligned}

and

\begin{aligned}\tilde{F}_{\beta \alpha} &=\frac{1}{{2}} \epsilon_{\beta \alpha i j} F^{i j} \\ &=\frac{1}{{2}} \left(\epsilon_{\beta \alpha 0 \sigma} F^{0 \sigma} +\epsilon_{\beta \alpha \sigma 0} F^{\sigma 0} \right) \\ &=\epsilon_{0 \beta \alpha \sigma} (-E^\sigma) \\ &=\epsilon_{\alpha \beta \sigma} E^\sigma\end{aligned}

Summarizing we have

\begin{aligned}F_{0 \alpha} &= E^\alpha \\ F^{0 \alpha} &= -E^\alpha \\ F^{\beta \alpha} &= F_{\beta \alpha} = \epsilon_{\alpha \beta \sigma} B^\sigma \\ \tilde{F}_{0 \alpha} &= - B_\alpha \\ \tilde{F}^{0 \alpha} &= B_\alpha \\ \tilde{F}_{\beta \alpha} &= \tilde{F}^{\beta \alpha} = \epsilon_{\alpha \beta \sigma} E^\sigma\end{aligned} \hspace{\stretch{1}}(2.67)

Is there a sign error in the \tilde{F}_{0 \alpha} = - B_\alpha result? Other than that we have the same sort of structure for the tensor with E and B switched around.

Let’s write these in matrix form, to compare

\begin{aligned}{\left\lVert{ \tilde{F}_{i j} }\right\rVert} &= \begin{bmatrix}0 & -B_x & -B_y & -B_z \\ B_x & 0 & -E_z & E_y \\ B_y & E_z & 0 & -E_x \\ B_z & -E_y & E_x & 0 \\ \end{bmatrix} \\ {\left\lVert{ \tilde{F}^{i j} }\right\rVert} &= \begin{bmatrix}0 & B_x & B_y & B_z \\ -B_x & 0 & -E_z & E_y \\ -B_y & E_z & 0 & -E_x \\ -B_z & -E_y & E_x & 0 \\ \end{bmatrix} \\ {\left\lVert{ F^{ij} }\right\rVert} &= \begin{bmatrix}0 & -E_x & -E_y & -E_z \\ E_x & 0 & -B_z & B_y \\ E_y & B_z & 0 & -B_x \\ E_z & -B_y & B_x & 0\end{bmatrix} \\ {\left\lVert{ F_{ij} }\right\rVert} &= \begin{bmatrix}0 & E_x & E_y & E_z \\ -E_x & 0 & -B_z & B_y \\ -E_y & B_z & 0 & -B_x \\ -E_z & -B_y & B_x & 0\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(2.73)

From these we can see by inspection that we have

\begin{aligned}\tilde{F}^{i j} F_{ij} = \tilde{F}_{i j} F^{ij} = 4 (\mathbf{E} \cdot \mathbf{B})\end{aligned} \hspace{\stretch{1}}(2.74)

This is consistent with the stated result in [1] (except for a factor of c due to units differences), so it appears the signs above are all kosher.

Now, let’s see if the dual tensor satisfies the vacuum equations.

\begin{aligned}\partial_j \tilde{F}^{i j}&=\partial_j \frac{1}{{2}} \epsilon^{i j k l} F_{k l} \\ &=\frac{1}{{2}} \epsilon^{i j k l} \partial_j (\partial_k A_l - \partial_l A_k) \\ &=\frac{1}{{2}} \epsilon^{i j k l} \partial_j \partial_k A_l - \frac{1}{{2}} \epsilon^{i j l k} \partial_j \partial_k A_l \\ &=\epsilon^{i j k l} \partial_j \partial_k A_l \\ &= 0 \qquad\square\end{aligned}

So the first checks out, with the final equality holding because the symmetric pair of derivatives \partial_j \partial_k is contracted against indexes antisymmetric in j and k, provided we have no sources. If we have sources, then we see here that Maxwell’s equations do not hold for the dual tensor, since this would imply that the four current density must be zero.

How about the Bianchi identity? That gives us

\begin{aligned}\epsilon^{i j k l} \partial_j \tilde{F}_{k l} &=\epsilon^{i j k l} \partial_j \frac{1}{{2}} \epsilon_{k l a b} F^{a b} \\ &=\frac{1}{{2}} \epsilon^{k l i j} \epsilon_{k l a b} \partial_j F^{a b} \\ &=\frac{1}{{2}} (2!) {\delta^i}_{[a} {\delta^j}_{b]} \partial_j F^{a b} \\ &=\partial_j (F^{i j} - F^{j i} ) \\ &=2 \partial_j F^{i j} .\end{aligned}

The factor of two is slightly curious. Is there a mistake above? If there is a mistake, it doesn’t change the fact that Maxwell’s equation

\begin{aligned}\partial_k F^{k i} = \frac{4 \pi}{c} j^i\end{aligned} \hspace{\stretch{1}}(2.75)

gives us zero for the Bianchi identity under the source free condition j^i = 0.

Problem 2. Transformation properties of \mathbf{E} and \mathbf{B}, again.

1. Statement

Use the form of F^{i j} from page 82 in the class notes, the transformation law for {\left\lVert{ F^{i j} }\right\rVert} given further down that same page, and the explicit form of the SO(1,3) matrix \hat{O} (say, corresponding to motion in the positive x_1 direction with speed v) to derive the transformation law of the fields \mathbf{E} and \mathbf{B}. Use the transformation law to find the electromagnetic field of a charged particle moving with constant speed v in the positive x_1 direction and check that the result agrees with the one that you obtained in Homework 2.

1. Solution

Given a transformation of coordinates

\begin{aligned}{x'}^i \rightarrow {O^i}_j x^j\end{aligned} \hspace{\stretch{1}}(3.76)

our rank 2 tensor F^{i j} transforms as

\begin{aligned}F^{i j} \rightarrow {O^i}_aF^{a b}{O^j}_b.\end{aligned} \hspace{\stretch{1}}(3.77)

Introducing matrices

\begin{aligned}\hat{O} &= {\left\lVert{{O^i}_j}\right\rVert} \\ \hat{F} &= {\left\lVert{F^{ij}}\right\rVert} = \begin{bmatrix}0 & -E_x & -E_y & -E_z \\ E_x & 0 & -B_z & B_y \\ E_y & B_z & 0 & -B_x \\ E_z & -B_y & B_x & 0\end{bmatrix} \end{aligned} \hspace{\stretch{1}}(3.78)

and noting that \hat{O}^\text{T} = {\left\lVert{{O^j}_i}\right\rVert}, we can express the electromagnetic strength tensor transformation as

\begin{aligned}\hat{F} \rightarrow \hat{O} \hat{F} \hat{O}^\text{T}.\end{aligned} \hspace{\stretch{1}}(3.80)

The class notes use {x'}^i \rightarrow O^{ij} x^j, which violates our conventions on mixed upper and lower indexes, but the end result 3.80 is the same. For a boost along the x_1 direction with rapidity \alpha, the transformation matrix is

\begin{aligned}{\left\lVert{{O^i}_j}\right\rVert} =\begin{bmatrix}\cosh\alpha & -\sinh\alpha & 0 & 0 \\ -\sinh\alpha & \cosh\alpha & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.81)

Writing

\begin{aligned}C &= \cosh\alpha = \gamma \\ S &= -\sinh\alpha = -\gamma \beta,\end{aligned} \hspace{\stretch{1}}(3.82)

we can compute the transformed field strength tensor

\begin{aligned}\hat{F}' &=\begin{bmatrix}C & S & 0 & 0 \\ S & C & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\end{bmatrix}\begin{bmatrix}0 & -E_x & -E_y & -E_z \\ E_x & 0 & -B_z & B_y \\ E_y & B_z & 0 & -B_x \\ E_z & -B_y & B_x & 0\end{bmatrix} \begin{bmatrix}C & S & 0 & 0 \\ S & C & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\end{bmatrix} \\ &=\begin{bmatrix}C & S & 0 & 0 \\ S & C & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\end{bmatrix}\begin{bmatrix}- S E_x        & -C E_x        & -E_y  & -E_z \\ C E_x          & S E_x         & -B_z  & B_y \\ C E_y + S B_z  & S E_y + C B_z & 0     & -B_x \\ C E_z - S B_y  & S E_z - C B_y & B_x   & 0 \end{bmatrix} \\ &=\begin{bmatrix}0 & -E_x & -C E_y - S B_z & - C E_z + S B_y \\ E_x & 0 & -S E_y - C B_z & - S E_z + C B_y \\ C E_y + S B_z & S E_y + C B_z & 0 & -B_x \\ C E_z - S B_y & S E_z - C B_y & B_x & 0\end{bmatrix} \\ &=\begin{bmatrix}0 & -E_x & -\gamma(E_y - \beta B_z) & - \gamma(E_z + \beta B_y) \\ E_x & 0 & - \gamma (-\beta E_y + B_z) & \gamma( \beta E_z + B_y) \\ \gamma (E_y - \beta B_z) & \gamma(-\beta E_y + B_z) & 0 & -B_x \\ \gamma (E_z + \beta B_y) & -\gamma(\beta E_z + B_y) & B_x & 0\end{bmatrix}.\end{aligned}

As a check we have the antisymmetry that is expected. There is also a regularity to the end result that is aesthetically pleasing, hinting that things are hopefully error free. In coordinates for \mathbf{E} and \mathbf{B} this is

\begin{aligned}E_x &\rightarrow E_x \\ E_y &\rightarrow \gamma ( E_y - \beta B_z ) \\ E_z &\rightarrow \gamma ( E_z + \beta B_y ) \\ B_x &\rightarrow B_x \\ B_y &\rightarrow \gamma ( B_y + \beta E_z ) \\ B_z &\rightarrow \gamma ( B_z - \beta E_y ) \end{aligned} \hspace{\stretch{1}}(3.84)

Writing \boldsymbol{\beta} = \mathbf{e}_1 \beta, we have

\begin{aligned}\boldsymbol{\beta} \times \mathbf{B} = \begin{vmatrix} \mathbf{e}_1 & \mathbf{e}_2 & \mathbf{e}_3 \\ \beta & 0 & 0 \\ B_x & B_y & B_z\end{vmatrix} = \mathbf{e}_2 (-\beta B_z) + \mathbf{e}_3( \beta B_y ),\end{aligned} \hspace{\stretch{1}}(3.90)

which puts us enroute to a tidier vector form

\begin{aligned}E_x &\rightarrow E_x \\ E_y &\rightarrow \gamma ( E_y + (\boldsymbol{\beta} \times \mathbf{B})_y ) \\ E_z &\rightarrow \gamma ( E_z + (\boldsymbol{\beta} \times \mathbf{B})_z ) \\ B_x &\rightarrow B_x \\ B_y &\rightarrow \gamma ( B_y - (\boldsymbol{\beta} \times \mathbf{E})_y ) \\ B_z &\rightarrow \gamma ( B_z - (\boldsymbol{\beta} \times \mathbf{E})_z ).\end{aligned} \hspace{\stretch{1}}(3.91)

For a vector \mathbf{A}, write \mathbf{A}_\parallel = (\mathbf{A} \cdot \hat{\mathbf{v}})\hat{\mathbf{v}}, \mathbf{A}_\perp = \mathbf{A} - \mathbf{A}_\parallel, allowing a compact description of the field transformation

\begin{aligned}\mathbf{E} &\rightarrow \mathbf{E}_\parallel + \gamma \mathbf{E}_\perp + \gamma (\boldsymbol{\beta} \times \mathbf{B})_\perp \\ \mathbf{B} &\rightarrow \mathbf{B}_\parallel + \gamma \mathbf{B}_\perp - \gamma (\boldsymbol{\beta} \times \mathbf{E})_\perp.\end{aligned} \hspace{\stretch{1}}(3.97)
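There were plenty of chances to drop a sign getting here, so a quick numerical sanity check of 3.84 seems worthwhile. This is just a numpy sketch of my own (random field values and an arbitrarily chosen \beta = 0.6 assumed), comparing the matrix transformation 3.80 against the component form above:

import numpy as np

rng = np.random.default_rng(0)
Ex, Ey, Ez, Bx, By, Bz = rng.random(6)
beta = 0.6
gamma = 1 / np.sqrt(1 - beta**2)
C, S = gamma, -gamma * beta    # cosh(alpha) and -sinh(alpha), as in 3.82

F = np.array([
    [0, -Ex, -Ey, -Ez],
    [Ex, 0, -Bz, By],
    [Ey, Bz, 0, -Bx],
    [Ez, -By, Bx, 0]])

O = np.array([
    [C, S, 0, 0],
    [S, C, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1]])

Fp = O @ F @ O.T   # eq. 3.80

# read off the primed fields and compare with eq. 3.84
assert np.allclose(Fp[1, 0], Ex)                         # E_x' = E_x
assert np.allclose(Fp[2, 0], gamma * (Ey - beta * Bz))   # E_y'
assert np.allclose(Fp[3, 0], gamma * (Ez + beta * By))   # E_z'
assert np.allclose(Fp[3, 2], Bx)                         # B_x' = B_x
assert np.allclose(Fp[1, 3], gamma * (By + beta * Ez))   # B_y'
assert np.allclose(Fp[2, 1], gamma * (Bz - beta * Ey))   # B_z'
print("3.84 checks out")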

Now, we want to consider the field of a moving particle. In the particle’s (unprimed) rest frame the field due to its potential \phi = q/r is

\begin{aligned}\mathbf{E} &= \frac{q}{r^2} \hat{\mathbf{r}} \\ \mathbf{B} &= 0.\end{aligned} \hspace{\stretch{1}}(3.99)

Coordinates for a “stationary” observer, who sees this particle moving along the x-axis at speed v are related by a boost in the -v direction

\begin{aligned}\begin{bmatrix}ct' \\ x' \\ y' \\ z'\end{bmatrix}=\begin{bmatrix}\gamma & \gamma (v/c) & 0 & 0 \\ \gamma (v/c) & \gamma & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\end{bmatrix}\begin{bmatrix}ct \\ x \\ y \\ z\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.101)

Therefore the fields in the observer frame will be

\begin{aligned}\mathbf{E}' &= \mathbf{E}_\parallel + \gamma \mathbf{E}_\perp - \gamma \frac{v}{c}(\mathbf{e}_1 \times \mathbf{B})_\perp = \mathbf{E}_\parallel + \gamma \mathbf{E}_\perp \\ \mathbf{B}' &= \mathbf{B}_\parallel + \gamma \mathbf{B}_\perp + \gamma \frac{v}{c}(\mathbf{e}_1 \times \mathbf{E})_\perp = \gamma \frac{v}{c}(\mathbf{e}_1 \times \mathbf{E})_\perp \end{aligned} \hspace{\stretch{1}}(3.102)

More explicitly with \mathbf{E} = \frac{q}{r^3}(x, y, z) this is

\begin{aligned}\mathbf{E}' &= \frac{q}{r^3}(x, \gamma y, \gamma z) \\ \mathbf{B}' &= \gamma \frac{q v}{c r^3} ( 0, -z, y )\end{aligned} \hspace{\stretch{1}}(3.104)

Comparing to Problem 3 in Problem set 2, I see that this matches the result obtained by separately transforming the gradient, the time partial, and the scalar potential. Actually, if I am being honest, I see that I made a sign error in all the coordinates of \mathbf{E}' when I initially did this (ungraded) problem in problem set 2. That sign error should have been obvious by considering the v=0 case, which would have mysteriously resulted in inversion of all the coordinates of the observed electric field.

2. Statement

A particle is moving with velocity \mathbf{v} in perpendicular \mathbf{E} and \mathbf{B} fields, all given in some particular “stationary” frame of reference.

\begin{enumerate}
\item Show that there exists a frame where the problem of finding the particle trajectory can be reduced to having either only an electric or only a magnetic field.
\item Explain what determines which case takes place.
\item Find the velocity \mathbf{v}_0 of that frame relative to the “stationary” frame.
\end{enumerate}

2. Solution

\paragraph{Part 1 and 2:} Existence of the transformation.

In the single particle Lorentz trajectory problem we wish to solve

\begin{aligned}m c \frac{du^i}{ds} = \frac{e}{c} F^{i j} u_j,\end{aligned} \hspace{\stretch{1}}(3.106)

which in matrix form we can write as

\begin{aligned}\frac{d U}{ds} = \frac{e}{m c^2} \hat{F} \hat{G} U.\end{aligned} \hspace{\stretch{1}}(3.107)

where we write our column vector of proper velocity components as U = {\left\lVert{u^i}\right\rVert}, and \hat{G} = {\left\lVert{g_{ij}}\right\rVert} is the matrix of the Minkowski metric. Under a transformation of coordinates {u'}^i = {O^i}_j u^j, with \hat{O} = {\left\lVert{{O^i}_j}\right\rVert}, this becomes

\begin{aligned}\hat{O} \frac{d U}{ds} = \frac{e}{m c^2} \hat{O} \hat{F} \hat{O}^\text{T} \hat{G} \hat{O} U.\end{aligned} \hspace{\stretch{1}}(3.108)

Suppose we can find eigenvectors for the matrix \hat{O} \hat{F} \hat{O}^\text{T} \hat{G}. That is for some eigenvalue \lambda, we can find an eigenvector \Sigma

\begin{aligned}\hat{O} \hat{F} \hat{O}^\text{T} \hat{G} \Sigma = \lambda \Sigma.\end{aligned} \hspace{\stretch{1}}(3.109)

Rearranging we have

\begin{aligned}(\hat{O} \hat{F} \hat{O}^\text{T} \hat{G} - \lambda I) \Sigma = 0\end{aligned} \hspace{\stretch{1}}(3.110)

and conclude that \Sigma lies in the null space of the matrix \hat{O} \hat{F} \hat{O}^\text{T} \hat{G} - \lambda I and that this difference of matrices must have a zero determinant

\begin{aligned}\text{Det} (\hat{O} \hat{F} \hat{O}^\text{T} \hat{G} - \lambda I) = -\text{Det} (\hat{O} \hat{F} \hat{O}^\text{T} - \lambda \hat{G}) = 0.\end{aligned} \hspace{\stretch{1}}(3.111)

Since \hat{G} = \hat{O} \hat{G} \hat{O}^\text{T} for any Lorentz transformation \hat{O} in SO(1,3), and \text{Det} ABC = \text{Det} A \text{Det} B \text{Det} C we have

\begin{aligned}\text{Det} (\hat{O} \hat{F} \hat{O}^\text{T} - \lambda G)= \text{Det} (\hat{F} - \lambda \hat{G}).\end{aligned} \hspace{\stretch{1}}(3.112)

In problem 1.6, we called this our characteristic equation P(\lambda) = \text{Det} (\hat{F} - \lambda \hat{G}). Observe that the characteristic equation is Lorentz invariant for any \lambda, which requires that the eigenvalues \lambda are also Lorentz invariants.

In problem 1.6 of this problem set we computed that this characteristic equation expands to

\begin{aligned}P(\lambda) = \text{Det} (\hat{F} - \lambda \hat{G}) = (\mathbf{E} \cdot \mathbf{B})^2 + \lambda^2 (\mathbf{B}^2 - \mathbf{E}^2) + \lambda^4.\end{aligned} \hspace{\stretch{1}}(3.113)

The eigenvalues for the system, also each necessarily Lorentz invariants, are

\begin{aligned}\lambda = \pm \frac{1}{{\sqrt{2}}} \sqrt{ \mathbf{E}^2 - \mathbf{B}^2 \pm \sqrt{ (\mathbf{E}^2 - \mathbf{B}^2)^2 - 4 (\mathbf{E} \cdot \mathbf{B})^2 }}.\end{aligned} \hspace{\stretch{1}}(3.114)
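The expansion of the characteristic polynomial is mechanical enough that it is worth letting the computer redo it. A small sympy sketch (same matrix conventions as my checks above; note that the determinant may come out with an overall minus sign relative to 3.113 depending on convention, which does not change the roots):

import sympy as sp

Ex, Ey, Ez, Bx, By, Bz, lam = sp.symbols('E_x E_y E_z B_x B_y B_z lambda')

F = sp.Matrix([
    [0, -Ex, -Ey, -Ez],
    [Ex, 0, -Bz, By],
    [Ey, Bz, 0, -Bx],
    [Ez, -By, Bx, 0]])
G = sp.diag(1, -1, -1, -1)

P = sp.expand((F - lam * G).det())

E2 = Ex**2 + Ey**2 + Ez**2
B2 = Bx**2 + By**2 + Bz**2
EdotB = Ex*Bx + Ey*By + Ez*Bz
target = EdotB**2 + lam**2 * (B2 - E2) + lam**4   # eq. 3.113

# one of these prints 0: Det(F - lambda G) = (+/-) P(lambda)
print(sp.simplify(P - target), sp.simplify(P + target))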

Observe that in the specific case where \mathbf{E} \cdot \mathbf{B} = 0, as in this problem, we must have \mathbf{E}' \cdot \mathbf{B}' = 0 in all frames, and the two non-zero eigenvalues of our characteristic polynomial are simply

\begin{aligned}\lambda = \pm \sqrt{\mathbf{E}^2 - \mathbf{B}^2}.\end{aligned} \hspace{\stretch{1}}(3.115)

These and \mathbf{E} \cdot \mathbf{B} = 0 are the invariants for this system. If we have \mathbf{E}^2 > \mathbf{B}^2 in one frame, we must also have {\mathbf{E}'}^2 > {\mathbf{B}'}^2 in another frame, still maintaining perpendicular fields. In particular if \mathbf{B}' = 0 we maintain real eigenvalues. Similarly if \mathbf{B}^2 > \mathbf{E}^2 in some frame, we must always have imaginary eigenvalues, and this is also true in the \mathbf{E}' = 0 case.

While the problem can be posed as a pure diagonalization problem (and even solved numerically this way for the general constant fields case), we can also work symbolically, thinking of the trajectories problem as simply seeking a transformation of frames that reduces the scope of the problem to one that is more tractable. That does not have to be the linear transformation that diagonalizes the system. Instead we are free to transform to a frame where one of the two fields \mathbf{E}' or \mathbf{B}' is zero, provided the invariants discussed are maintained.

\paragraph{Part 3:} Finding the boost velocity that wipes out one of the fields.

Let’s now consider a Lorentz boost \hat{O}, and seek to solve for the boost velocity that wipes out one of the fields, given the invariants that must be maintained for the system.

To make things concrete, suppose that our perpendicular fields are given by \mathbf{E} = E \mathbf{e}_2 and \mathbf{B} = B \mathbf{e}_3.

Let us also assume that we can find the velocity \mathbf{v}_0 for which one or more of the transformed fields is zero. Suppose that velocity is

\begin{aligned}\mathbf{v}_0 = v_0 (\alpha_1, \alpha_2, \alpha_3) = v_0 \hat{\mathbf{v}}_0,\end{aligned} \hspace{\stretch{1}}(3.116)

where \alpha_i are the direction cosines of \mathbf{v}_0 so that \sum_i \alpha_i^2 = 1. We will want to compute the components of \mathbf{E} and \mathbf{B} parallel and perpendicular to this velocity.

Those are

\begin{aligned}\mathbf{E}_\parallel &= E \mathbf{e}_2 \cdot (\alpha_1, \alpha_2, \alpha_3) (\alpha_1, \alpha_2, \alpha_3) \\ &= E \alpha_2 (\alpha_1, \alpha_2, \alpha_3) \\ \end{aligned}

\begin{aligned}\mathbf{E}_\perp &= E \mathbf{e}_2 - \mathbf{E}_\parallel \\ &= E (-\alpha_1 \alpha_2, 1 - \alpha_2^2, -\alpha_2 \alpha_3) \\ &= E (-\alpha_1 \alpha_2, \alpha_1^2 + \alpha_3^2, -\alpha_2 \alpha_3) \\ \end{aligned}

For the magnetic field we have

\begin{aligned}\mathbf{B}_\parallel &= B \alpha_3 (\alpha_1, \alpha_2, \alpha_3),\end{aligned}

and

\begin{aligned}\mathbf{B}_\perp &= B \mathbf{e}_3 - \mathbf{B}_\parallel \\ &= B (-\alpha_1 \alpha_3, -\alpha_2 \alpha_3, \alpha_1^2 + \alpha_2^2)  \\ \end{aligned}

Now, observe that (\boldsymbol{\beta} \times \mathbf{B})_\parallel \propto ((\hat{\mathbf{v}}_0 \times \mathbf{B}) \cdot \hat{\mathbf{v}}_0) \hat{\mathbf{v}}_0, but this is just zero. So we have (\boldsymbol{\beta} \times \mathbf{B})_\perp = \boldsymbol{\beta} \times \mathbf{B}, and similarly for the \boldsymbol{\beta} \times \mathbf{E} term. So our cross product terms are just

\begin{aligned}\hat{\mathbf{v}}_0 \times \mathbf{B} &=         \begin{vmatrix}         \mathbf{e}_1 & \mathbf{e}_2 & \mathbf{e}_3 \\         \alpha_1 & \alpha_2 & \alpha_3 \\         0 & 0 & B         \end{vmatrix} = B (\alpha_2, -\alpha_1, 0) \\ \hat{\mathbf{v}}_0 \times \mathbf{E} &=         \begin{vmatrix}         \mathbf{e}_1 & \mathbf{e}_2 & \mathbf{e}_3 \\         \alpha_1 & \alpha_2 & \alpha_3 \\         0 & E & 0         \end{vmatrix} = E (-\alpha_3, 0, \alpha_1)\end{aligned}

We can now express how the fields transform, given this arbitrary boost velocity. From 3.97, this is

\begin{aligned}\mathbf{E} &\rightarrow E \alpha_2 (\alpha_1, \alpha_2, \alpha_3) + \gamma E (-\alpha_1 \alpha_2, \alpha_1^2 + \alpha_3^2, -\alpha_2 \alpha_3) + \gamma \frac{v_0}{c} B (\alpha_2, -\alpha_1, 0) \\ \mathbf{B} &\rightarrow B \alpha_3 (\alpha_1, \alpha_2, \alpha_3)+ \gamma B (-\alpha_1 \alpha_3, -\alpha_2 \alpha_3, \alpha_1^2 + \alpha_2^2)  - \gamma \frac{v_0}{c} E (-\alpha_3, 0, \alpha_1)\end{aligned} \hspace{\stretch{1}}(3.117)

Zero Electric field case.

Let’s tackle the two cases separately. First, when {\left\lvert{\mathbf{B}}\right\rvert} > {\left\lvert{\mathbf{E}}\right\rvert}, we can transform to a frame where \mathbf{E}'=0. In coordinates, 3.117 supplies us with three equations. These are

\begin{aligned}0 &= E \alpha_2 \alpha_1 (1 - \gamma) + \gamma \frac{v_0}{c} B \alpha_2  \\ 0 &= E \alpha_2^2 + \gamma E (\alpha_1^2 + \alpha_3^2) - \gamma \frac{v_0}{c} B \alpha_1  \\ 0 &= E \alpha_2 \alpha_3 (1 - \gamma).\end{aligned} \hspace{\stretch{1}}(3.119)

Assuming a solution exists, the \mathbf{e}_3 coordinate equation implies that one of \alpha_2 or \alpha_3 is zero. Perhaps there are solutions with \alpha_3 = 0 too, but inspection shows that \alpha_2 = 0 nicely kills off the first equation. Since \alpha_1^2 + \alpha_2^2 + \alpha_3^2 = 1, the second equation then reduces to

\begin{aligned}0 = E - \frac{v_0}{c} B \alpha_1 \end{aligned} \hspace{\stretch{1}}(3.122)

Or

\begin{aligned}\alpha_1 &= \frac{E}{B} \frac{c}{v_0} \\ \alpha_2 &= 0 \\ \alpha_3 &= \sqrt{1 - \frac{E^2}{B^2} \frac{c^2}{v_0^2} }\end{aligned} \hspace{\stretch{1}}(3.123)

Our velocity was \mathbf{v}_0 = v_0 (\alpha_1, \alpha_2, \alpha_3) solving the problem for the {\left\lvert{\mathbf{B}}\right\rvert}^2 > {\left\lvert{\mathbf{E}}\right\rvert}^2 case up to an adjustable constant v_0. That constant comes with constraints however, since we must also have our cosine \alpha_1 \le 1. Expressed another way, the magnitude of the boost velocity is constrained by the relation

\begin{aligned}\frac{{\left\lvert{\mathbf{v}_0}\right\rvert}}{c} \ge {\left\lvert{\frac{E}{B}}\right\rvert}.\end{aligned} \hspace{\stretch{1}}(3.126)

It appears we may also pick the equality case, so one velocity (not unique) that should transform away the electric field is

\begin{aligned}\boxed{\mathbf{v}_0 = c {\left\lvert{\frac{E}{B}}\right\rvert} \mathbf{e}_1 = c \frac{\mathbf{E} \times \mathbf{B}}{\mathbf{B}^2}.}\end{aligned} \hspace{\stretch{1}}(3.127)

This particular boost direction is perpendicular to both fields. Observe that this highlights the invariance condition {\left\lvert{\frac{E}{B}}\right\rvert} < 1 since we see this is required for a physically realizable velocity. Boosting in this direction will reduce our problem to one that has only the magnetic field component.

Zero Magnetic field case.

Now, let’s consider the case where we transform the magnetic field away, the case when our characteristic polynomial has strictly real eigenvalues \lambda = \pm \sqrt{\mathbf{E}^2 - \mathbf{B}^2}. In this case, if we write out our equations for the transformed magnetic field and require these to separately equal zero, we have

\begin{aligned}0 &= B \alpha_3 \alpha_1 ( 1 - \gamma ) + \gamma \frac{v_0}{c} E \alpha_3 \\ 0 &= B \alpha_2 \alpha_3 ( 1 - \gamma ) \\ 0 &= B (\alpha_3^2 + \gamma (\alpha_1^2 + \alpha_2^2)) - \gamma \frac{v_0}{c} E \alpha_1.\end{aligned} \hspace{\stretch{1}}(3.128)

Similar to before we see that \alpha_3 = 0 kills off the first and second equations, leaving just

\begin{aligned}0 = B - \frac{v_0}{c} E \alpha_1.\end{aligned} \hspace{\stretch{1}}(3.131)

We now have a solution for the family of direction vectors that kill the magnetic field off

\begin{aligned}\alpha_1 &= \frac{B}{E} \frac{c}{v_0} \\ \alpha_2 &= \sqrt{ 1 - \frac{B^2}{E^2} \frac{c^2}{v_0^2} } \\ \alpha_3 &= 0.\end{aligned} \hspace{\stretch{1}}(3.132)

In addition to the initial constraint that {\left\lvert{\frac{B}{E}}\right\rvert} < 1, we have as before, constraints on the allowable values of v_0

\begin{aligned}\frac{{\left\lvert{\mathbf{v}_0}\right\rvert}}{c} \ge {\left\lvert{\frac{B}{E}}\right\rvert}.\end{aligned} \hspace{\stretch{1}}(3.135)

Like before, we can pick the equality case \alpha_1 = 1, yielding a boost velocity of

\begin{aligned}\boxed{\mathbf{v}_0 = c {\left\lvert{\frac{B}{E}}\right\rvert} \mathbf{e}_1 = c \frac{\mathbf{E} \times \mathbf{B}}{\mathbf{E}^2}.}\end{aligned} \hspace{\stretch{1}}(3.136)

Again, we see that the invariance condition {\left\lvert{\mathbf{B}}\right\rvert} < {\left\lvert{\mathbf{E}}\right\rvert} is required for a physically realizable velocity if that velocity is entirely perpendicular to the fields.
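Both boxed results are easy to verify numerically: build the field strength tensor for the perpendicular fields, boost along \mathbf{e}_1 with the drift speed, and confirm that the target field vanishes. A quick numpy sketch (concrete field values assumed):

import numpy as np

def boost_x(beta):
    g = 1 / np.sqrt(1 - beta**2)
    return np.array([
        [g, -g * beta, 0, 0],
        [-g * beta, g, 0, 0],
        [0, 0, 1, 0],
        [0, 0, 0, 1]])

def F_tensor(E, B):
    Ex, Ey, Ez = E
    Bx, By, Bz = B
    return np.array([
        [0, -Ex, -Ey, -Ez],
        [Ex, 0, -Bz, By],
        [Ey, Bz, 0, -Bx],
        [Ez, -By, Bx, 0]])

# |E| < |B|: boosting with v_0/c = E/B should kill the electric field
E, B = 0.3, 1.0
Fp = boost_x(E / B) @ F_tensor((0, E, 0), (0, 0, B)) @ boost_x(E / B).T
print(Fp[1, 0], Fp[2, 0], Fp[3, 0])   # E' components, all zero

# |B| < |E|: boosting with v_0/c = B/E should kill the magnetic field
E, B = 1.0, 0.3
Fp = boost_x(B / E) @ F_tensor((0, E, 0), (0, 0, B)) @ boost_x(B / E).T
print(Fp[3, 2], Fp[1, 3], Fp[2, 1])   # B' components, all zero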

Problem 3. Continuity equation for delta function current distributions.

Statement

Show explicitly that the electromagnetic 4-current j^i for a particle moving with constant velocity (considered in class, p. 100-101 of notes) is conserved \partial_i j^i = 0. Give a physical interpretation of this conservation law, for example by integrating \partial_i j^i over some spacetime region and giving an integral form to the conservation law (\partial_i j^i = 0 is known as the “continuity equation”).

Solution

First let’s review. Our four current was defined as

\begin{aligned}j^i(x) = \sum_A c e_A \int_{x(\tau)} dx_A^i(\tau) \delta^4(x - x_A(\tau)).\end{aligned} \hspace{\stretch{1}}(4.137)

If each of the trajectories x_A(\tau) represents constant velocity motion, we have

\begin{aligned}x_A(\tau) = x_A(0) + \gamma_A \tau ( c, \mathbf{v}_A ).\end{aligned} \hspace{\stretch{1}}(4.138)

The spacetime split of this four vector is

\begin{aligned}x_A^0(\tau) &= x_A^0(0) + \gamma_A \tau c \\ \mathbf{x}_A(\tau) &= \mathbf{x}_A(0) + \gamma_A \tau \mathbf{v}_A,\end{aligned} \hspace{\stretch{1}}(4.139)

with differentials

\begin{aligned}dx_A^0(\tau) &= \gamma_A d\tau c \\ d\mathbf{x}_A(\tau) &= \gamma_A d\tau \mathbf{v}_A.\end{aligned} \hspace{\stretch{1}}(4.141)

Writing out the delta functions explicitly we have

\begin{aligned}\begin{aligned}j^i(x) = \sum_A &c e_A \int_{x(\tau)} dx_A^i(\tau) \delta(x^0 - x_A^0(0) - \gamma_A c \tau) \delta(x^1 - x_A^1(0) - \gamma_A v_A^1 \tau) \\ &\delta(x^2 - x_A^2(0) - \gamma_A v_A^2 \tau) \delta(x^3 - x_A^3(0) - \gamma_A v_A^3 \tau)\end{aligned}\end{aligned} \hspace{\stretch{1}}(4.143)

So our time and space components of the current can be written

\begin{aligned}j^0(x) &= \sum_A c^2 e_A \gamma_A \int_{x(\tau)} d\tau\delta(x^0 - x_A^0(0) - \gamma_A c \tau)\delta^3(\mathbf{x} - \mathbf{x}_A(0) - \gamma_A \mathbf{v}_A \tau) \\ \mathbf{j}(x) &= \sum_A c e_A \mathbf{v}_A \gamma_A \int_{x(\tau)} d\tau\delta(x^0 - x_A^0(0) - \gamma_A c \tau)\delta^3(\mathbf{x} - \mathbf{x}_A(0) - \gamma_A \mathbf{v}_A \tau).\end{aligned} \hspace{\stretch{1}}(4.144)

Each of these integrals can be evaluated with respect to the time coordinate delta function leaving the distribution

\begin{aligned}j^0(x) &= \sum_A c e_A \delta^3(\mathbf{x} - \mathbf{x}_A(0) - \frac{\mathbf{v}_A}{c} (x^0 - x_A^0(0))) \\ \mathbf{j}(x) &= \sum_A e_A \mathbf{v}_A \delta^3(\mathbf{x} - \mathbf{x}_A(0) - \frac{\mathbf{v}_A}{c} (x^0 - x_A^0(0)))\end{aligned} \hspace{\stretch{1}}(4.146)

With this more general expression (multi-particle case) it should be possible to show that the four divergence is zero, however, the problem only asks for one particle. For the one particle case, we can make things really easy by taking the initial point in space and time as the origin, and aligning our velocity with one of the coordinates (say x).

Doing so we have the result derived in class

\begin{aligned}j = e \begin{bmatrix}c \\ v \\ 0 \\ 0 \end{bmatrix}\delta(x - v x^0/c)\delta(y)\delta(z).\end{aligned} \hspace{\stretch{1}}(4.148)

Our divergence then has only two portions

\begin{aligned}\frac{\partial {j^0}}{\partial {x^0}} &= e c (-v/c) \delta'(x - v x^0/c) \delta(y) \delta(z) \\ \frac{\partial {j^1}}{\partial {x}} &= e v \delta'(x - v x^0/c) \delta(y) \delta(z).\end{aligned} \hspace{\stretch{1}}(4.149)

and these cancel out when summed. Note that this requires us to be loose with our delta functions, treating them like regular functions that are differentiable.
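Incidentally, sympy is perfectly happy to be just as loose with delta functions, so the cancellation in 4.149 can be checked formally in a couple of lines (a sketch of my own, with x^0 written as x0):

import sympy as sp

x0, x, y, z = sp.symbols('x0 x y z', real=True)
e, v, c = sp.symbols('e v c', positive=True)

delta = sp.DiracDelta(x - v * x0 / c) * sp.DiracDelta(y) * sp.DiracDelta(z)
j0 = e * c * delta   # c rho, from eq. 4.148
jx = e * v * delta   # x component of the current

print(sp.diff(j0, x0) + sp.diff(jx, x))   # 0: the two derivative terms cancel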

For the more general multiparticle case, we can treat the sum one particle at a time, and in each case, rotate coordinates so that the four divergence only picks up one term.

As for physical interpretation via integral, we have using the four dimensional divergence theorem

\begin{aligned}\int d^4 x \partial_i j^i = \int j^i dS_i\end{aligned} \hspace{\stretch{1}}(4.151)

where dS_i is the three-volume element perpendicular to a x^i = \text{constant} plane. These volume elements are detailed generally in the text [2]; however, they do note one special case specifically, dS_0 = dx dy dz, the element of the three-dimensional (spatial) volume “normal” to hyperplanes ct = \text{constant}.

Without actually computing the determinants, we have something that is roughly of the form

\begin{aligned}0 = \int j^i dS_i=\int c \rho dx dy dz+\int \mathbf{j} \cdot (\mathbf{n}_x c dt dy dz + \mathbf{n}_y c dt dx dz + \mathbf{n}_z c dt dx dy).\end{aligned} \hspace{\stretch{1}}(4.152)

It is cheating a bit to just write \mathbf{n}_x, \mathbf{n}_y, \mathbf{n}_z; are there specific orientations required by the metric? To be precise, we’d have to calculate the determinants detailed in the text, and then do the duality transformations.

Per unit time, we can write instead

\begin{aligned}\frac{\partial {}}{\partial {t}} \int \rho dV= -\int \mathbf{j} \cdot (\mathbf{n}_x dy dz + \mathbf{n}_y dx dz + \mathbf{n}_z dx dy)\end{aligned} \hspace{\stretch{1}}(4.153)

Loosely, this describes the fact that the rate of change of charge in a volume must be matched by the “flow” of current through the surface of that volume in the same interval of time.

References

[1] Wikipedia. Electromagnetic tensor — wikipedia, the free encyclopedia [online]. 2011. [Online; accessed 27-February-2011]. http://en.wikipedia.org/w/index.php?title=Electromagnetic_tensor&oldid=414989505.

[2] L.D. Landau and E.M. Lifshitz. The classical theory of fields. Butterworth-Heinemann, 1980.


A problem on spherical harmonics.

Posted by peeterjoot on January 10, 2011

[Click here for a PDF of this post with nicer formatting]

Motivation.

This was one of the PHY356 exam questions from the final that I recall screwing up on, and then figuring out after the fact on the drive home. The question actually clarified a difficulty I’d had, but unfortunately I hadn’t had the good luck to attempt such a question before the exam, which would have helped me sort this out in time.

From what I recall the question provided an initial state, with some degeneracy in m, perhaps of the following form

\begin{aligned}{\lvert {\phi(0)} \rangle} = \sqrt{\frac{1}{7}} {\lvert { 12 } \rangle}+\sqrt{\frac{2}{7}} {\lvert { 10 } \rangle}+\sqrt{\frac{4}{7}} {\lvert { 20 } \rangle},\end{aligned} \hspace{\stretch{1}}(1.1)

and a Hamiltonian of the form

\begin{aligned}H = \alpha L_z\end{aligned} \hspace{\stretch{1}}(1.2)

From what I recall of the problem, I am going to reattempt it here now.

Evolved state.

One part of the question was to calculate the evolved state. Application of the time evolution operator gives us

\begin{aligned}{\lvert {\phi(t)} \rangle} = e^{-i \alpha L_z t/\hbar} \left(\sqrt{\frac{1}{7}} {\lvert { 12 } \rangle}+\sqrt{\frac{2}{7}} {\lvert { 10 } \rangle}+\sqrt{\frac{4}{7}} {\lvert { 20 } \rangle} \right).\end{aligned} \hspace{\stretch{1}}(1.3)

Now we note that L_z {\lvert {12} \rangle} = 2 \hbar {\lvert {12} \rangle}, and L_z {\lvert { l 0} \rangle} = 0 {\lvert {l 0} \rangle}, so the exponentials reduce this nicely to just

\begin{aligned}{\lvert {\phi(t)} \rangle} = \sqrt{\frac{1}{7}} e^{ -2 i \alpha t } {\lvert { 12 } \rangle}+\sqrt{\frac{2}{7}} {\lvert { 10 } \rangle}+\sqrt{\frac{4}{7}} {\lvert { 20 } \rangle}.\end{aligned} \hspace{\stretch{1}}(1.4)

Probabilities for L_z measurement outcomes.

I believe we were also asked what the probabilities for the outcomes of a measurement of L_z at this time would be. Here is one place that I think I messed up, and it was really a translation error, going from the English description of the problem to the math description of the same. I’d had trouble with this process a few times in the problems, and managed to blunder through uses of language like “measure” and “outcome”, but I don’t think I really understood how these terms were used properly.

What are the outcomes that we measure? We measure operators, but the result of a measurement is the eigenvalue associated with the operator. What are the eigenvalues of the L_z operator? These are the m \hbar values, from the operation L_z {\lvert {l m} \rangle} = m \hbar {\lvert {l m} \rangle}. So, given this initial state, there are really two outcomes that are possible, since we have two distinct eigenvalues. These are 2 \hbar and 0 for m = 2, and m= 0 respectively.

The probability of measuring the “outcome” 2 \hbar is the probability associated with the amplitude \left\langle{{ 1 2 }} \vert {{\phi(t)}}\right\rangle (i.e. the absolute square of this value). That is

\begin{aligned}{\left\lvert{ \left\langle{{ 1 2 }} \vert {{\phi(t) }}\right\rangle }\right\rvert}^2 = \frac{1}{7}.\end{aligned} \hspace{\stretch{1}}(1.5)

Now, the only other outcome for a measurement of L_z for this state is a measurement of 0 \hbar, and the probability of this is then just 1 - \frac{1}{7} = \frac{6}{7}. On the exam, I think I listed probabilities for three outcomes, with values \frac{1}{7}, \frac{2}{7}, \frac{4}{7} respectively, but in retrospect that seems blatantly wrong.

Probabilities for \mathbf{L}^2 measurement outcomes.

What are the probabilities for the outcomes for a measurement of \mathbf{L}^2 after this? The first question is really what are the outcomes. That’s really a question of what are the possible eigenvalues of \mathbf{L}^2 that can be measured at this point. Recall that we have

\begin{aligned}\mathbf{L}^2 {\lvert {l m} \rangle} = \hbar^2 l (l + 1) {\lvert {l m} \rangle}\end{aligned} \hspace{\stretch{1}}(1.6)

So for a state that has only l=1,2 contributions before the measurement, the eigenvalues that can be observed for the \mathbf{L}^2 operator are 2 \hbar^2 and 6 \hbar^2 respectively.

For the l=2 case, our probability is 4/7, leaving 3/7 as the probability for measurement of the l=1 (2 \hbar^2) eigenvalue. We can compute this two ways, and it seems worthwhile to consider both. This first method makes use of the fact that the L_z operator leaves the state vector intact, but it also seems like a bit of a cheat. Consider instead two possible results of measurement after the L_z observation. When an L_z measurement of 0 \hbar is performed our state will be left with only the m=0 kets. That is

\begin{aligned}{\lvert {\psi_a} \rangle} = \frac{1}{{\sqrt{3}}} \left( {\lvert {10} \rangle} + \sqrt{2} {\lvert {20} \rangle} \right),\end{aligned} \hspace{\stretch{1}}(1.7)

whereas, when a 2 \hbar measurement of L_z is performed our state would then only have the m=2 contribution, and would be

\begin{aligned}{\lvert {\psi_b} \rangle} = e^{-2 i \alpha t} {\lvert {12 } \rangle}.\end{aligned} \hspace{\stretch{1}}(1.8)

We have two possible ways of measuring the 2 \hbar^2 eigenvalue for \mathbf{L}^2. One is when our state was {\lvert {\psi_a} \rangle}, where the resulting state has a {\lvert {10} \rangle} component, and the other is after the m=2 measurement, where our state is left with a {\lvert {12} \rangle} component.

The resulting probability is then a conditional probability result

\begin{aligned}\frac{6}{7} {\left\lvert{ \left\langle{{10}} \vert {{\psi_a}}\right\rangle }\right\rvert}^2 + \frac{1}{7} {\left\lvert{ \left\langle{{12 }} \vert {{\psi_b}}\right\rangle}\right\rvert}^2 = \frac{3}{7}\end{aligned} \hspace{\stretch{1}}(1.9)

The result is the same, as expected, but this is likely a more convincing argument.
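Both computations are also easy to model numerically, treating the kets as basis vectors and the measurements as projections. A small numpy sketch (basis ordering {\lvert {12} \rangle}, {\lvert {10} \rangle}, {\lvert {20} \rangle} assumed; the time evolution factor is a pure phase and drops out of all of these probabilities):

import numpy as np

psi = np.sqrt(np.array([1, 2, 4]) / 7).astype(complex)   # initial state coefficients

P_m2 = np.diag([1, 0, 0])   # projector onto the m = 2 subspace
P_m0 = np.diag([0, 1, 1])   # projector onto the m = 0 subspace
P_l1 = np.diag([1, 1, 0])   # projector onto the l = 1 subspace

p2 = np.vdot(psi, P_m2 @ psi).real   # probability of L_z outcome 2 hbar
p0 = np.vdot(psi, P_m0 @ psi).real   # probability of L_z outcome 0

# states after each L_z outcome, renormalized
psi_b = P_m2 @ psi; psi_b /= np.linalg.norm(psi_b)
psi_a = P_m0 @ psi; psi_a /= np.linalg.norm(psi_a)

# conditional probability of then measuring the L^2 outcome 2 hbar^2 (l = 1)
p_l1 = p0 * np.vdot(psi_a, P_l1 @ psi_a).real + p2 * np.vdot(psi_b, P_l1 @ psi_b).real
print(p2, p0, p_l1)   # 1/7, 6/7, 3/7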


Some worked problems from old PHY356 exams.

Posted by peeterjoot on January 9, 2011

[Click here for a PDF of this post with nicer formatting]

Motivation.

I liked some of the old exam questions that I worked through in preparation for the exam, and thought I’d write up a few of them for potential future reference.

Questions from the Dec 2007 PHY355H1F exam.

1b. Parity operator.

\paragraph{Q:} If \Pi is the parity operator, defined by \Pi {\lvert {x} \rangle} = {\lvert {-x} \rangle}, where {\lvert {x} \rangle} is the eigenket of the position operator X with eigenvalue x, and P is the momentum operator conjugate to X, show (carefully) that \Pi P \Pi = -P.

\paragraph{A:}

Consider the matrix element {\langle {-x'} \rvert} \left[{\Pi},{P}\right] {\lvert {x} \rangle}. This is

\begin{aligned}{\langle {-x'} \rvert} \left[{\Pi},{P}\right] {\lvert {x} \rangle}&={\langle {-x'} \rvert} \Pi P - P \Pi {\lvert {x} \rangle} \\ &={\langle {-x'} \rvert} \Pi P {\lvert {x} \rangle} - {\langle {-x'} \rvert} P \Pi {\lvert {x} \rangle} \\ &={\langle {x'} \rvert} P {\lvert {x} \rangle} - {\langle {-x'} \rvert} P {\lvert {-x} \rangle} \\ &=- i \hbar \left(\delta(x'-x) \frac{\partial {}}{\partial {x}}-\underbrace{\delta(-x' -(-x))}_{= \delta(x-x') = \delta(x'-x)} \frac{\partial {}}{\partial {(-x)}}\right) \\ &=- 2 i \hbar \delta(x'-x) \frac{\partial {}}{\partial {x}} \\ &=2 {\langle {x'} \rvert} P {\lvert {x} \rangle} \\ &=2 {\langle {-x'} \rvert} \Pi P {\lvert {x} \rangle} \\ \end{aligned}

We’ve taken advantage of the Hermitian property of P and \Pi here, and can rearrange for

\begin{aligned}{\langle {-x'} \rvert} \Pi P - P \Pi - 2 \Pi P {\lvert {x} \rangle} = 0\end{aligned} \hspace{\stretch{1}}(2.1)

Since this is true for all {\langle {-x'} \rvert} and {\lvert {x} \rangle} we have

\begin{aligned}\Pi P + P \Pi = 0.\end{aligned} \hspace{\stretch{1}}(2.2)

Right multiplication by \Pi and rearranging we have

\begin{aligned}\Pi P \Pi = - P \Pi \Pi = - P.\end{aligned} \hspace{\stretch{1}}(2.3)

1f. Free particle propagator.

\paragraph{Q:} For a free particle moving in one-dimension, the propagator (i.e. the coordinate representation of the evolution operator),

\begin{aligned}G(x,x';t) = {\langle {x} \rvert} U(t) {\lvert {x'} \rangle}\end{aligned} \hspace{\stretch{1}}(2.4)

is given by

\begin{aligned}G(x,x';t) = \sqrt{\frac{m}{2 \pi i \hbar t}} e^{i m (x-x')^2/ (2 \hbar t)}.\end{aligned} \hspace{\stretch{1}}(2.5)

\paragraph{A:}

This problem is actually fairly straightforward, but it is nice to work it having had a similar problem set question where we were asked about this time evolution operator matrix element (i.e. what its physical meaning is). Here we have a concrete example of the form of this matrix operator.

Proceeding directly, we have

\begin{aligned}{\langle {x} \rvert} U {\lvert {x'} \rangle}&=\int \left\langle{x} \vert {p'}\right\rangle {\langle {p'} \rvert} U {\lvert {p} \rangle} \left\langle{p} \vert {x'}\right\rangle dp dp' \\ &=\int u_{p'}(x) {\langle {p'} \rvert} e^{-i P^2 t/(2 m \hbar)} {\lvert {p} \rangle} u_p^{*}(x') dp dp' \\ &=\int u_{p'}(x) e^{-i p^2 t/(2 m \hbar)} \delta(p-p') u_p^{*}(x') dp dp' \\ &=\int u_{p}(x) e^{-i p^2 t/(2 m \hbar)} u_p^{*}(x') dp \\ &=\frac{1}{(\sqrt{2 \pi \hbar})^2} \int e^{i p (x-x')/\hbar} e^{-i p^2 t/(2 m \hbar)} dp \\ &=\frac{1}{2 \pi \hbar} \int e^{i p (x-x')/\hbar} e^{-i p^2 t/(2 m \hbar)} dp \\ &=\frac{1}{2 \pi} \int e^{i k (x-x')} e^{-i \hbar k^2 t/(2 m)} dk \\ &=\frac{1}{2 \pi} \int dk e^{- \left(k^2 \frac{ i \hbar t}{2m} - i k (x-x')\right)} \\ &=\frac{1}{2 \pi} \int dk e^{- \frac{ i \hbar t}{2m}\left(k - i \frac{2m}{i \hbar t}\frac{(x-x')}{2} \right)^2- \frac{i^2 2 m (x-x')^2}{4 i \hbar t} } \\ &=\frac{1}{2 \pi}  \sqrt{\pi} \sqrt{\frac{2m}{i \hbar t}}e^{\frac{ i m (x-x')^2}{2 \hbar t}},\end{aligned}

which is the desired result. Now, let’s look at how this would be used. We can express our time evolved state using this matrix element by introducing an identity

\begin{aligned}\left\langle{{x}} \vert {{\psi(t)}}\right\rangle &={\langle {x} \rvert} U {\lvert {\psi(0)} \rangle} \\ &=\int dx' {\langle {x} \rvert} U {\lvert {x'} \rangle} \left\langle{{x'}} \vert {{\psi(0)}}\right\rangle \\ &=\sqrt{\frac{m}{2 \pi i \hbar t}} \int dx' e^{i m (x-x')^2/ (2 \hbar t)}\left\langle{{x'}} \vert {{\psi(0)}}\right\rangle \\ \end{aligned}

This gives us

\begin{aligned}\psi(x, t)=\sqrt{\frac{m}{2 \pi i \hbar t}} \int dx' e^{i m (x-x')^2/ (2 \hbar t)} \psi(x', 0)\end{aligned} \hspace{\stretch{1}}(2.6)
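For a normalizable initial state the convolution 2.6 does make sense, and is easy to spot check numerically. Here's a rough numpy sketch of my own (a unit width Gaussian packet and \hbar = m = 1 assumed), comparing the discretized propagator integral against the known closed form for a freely spreading Gaussian:

import numpy as np

hbar = m = 1.0
t = 2.0
N, L = 1024, 40.0
dx = L / N
x = (np.arange(N) - N // 2) * dx

psi0 = np.exp(-x**2 / 2) / np.pi**0.25   # Gaussian packet at rest

# direct discretization of psi(x,t) = int G(x,x';t) psi(x',0) dx', eq. 2.6
G = np.sqrt(m / (2j * np.pi * hbar * t)) \
    * np.exp(1j * m * (x[:, None] - x[None, :])**2 / (2 * hbar * t))
psi_t = (G @ psi0) * dx

# closed form for the freely spreading unit width Gaussian (hbar = m = 1)
psi_exact = np.exp(-x**2 / (2 * (1 + 1j * t))) / (np.pi**0.25 * np.sqrt(1 + 1j * t))

print(np.max(np.abs(psi_t - psi_exact)))   # tiny: the trapezoid sum is
                                           # essentially exact for this integrand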

However, note that our free particle wave function at time zero is

\begin{aligned}\psi(x, 0) = \frac{e^{i p x/\hbar}}{\sqrt{2 \pi \hbar}}\end{aligned} \hspace{\stretch{1}}(2.7)

So the convolution integral 2.6 does not exist. We likely have to require that the solution be not a pure state, but instead a superposition of a set of continuous states (a wave packet in position or momentum space related by Fourier transforms). That is

\begin{aligned}\psi(x, 0) &= \frac{1}{{\sqrt{2 \pi \hbar}}} \int \hat{\psi}(p, 0) e^{i p x/\hbar} dp \\ \hat{\psi}(p, 0) &= \frac{1}{{\sqrt{2 \pi \hbar}}} \int \psi(x'', 0) e^{-i p x''/\hbar} dx''\end{aligned} \hspace{\stretch{1}}(2.8)

The time evolution of this wave packet is then determined by the propagator, and is

\begin{aligned}\psi(x,t) =\sqrt{\frac{m}{2 \pi i \hbar t}} \frac{1}{{\sqrt{2 \pi \hbar}}} \int dx' dpe^{i m (x-x')^2/ (2 \hbar t)}\hat{\psi}(p, 0) e^{i p x'/\hbar} ,\end{aligned} \hspace{\stretch{1}}(2.10)

or in terms of the position space wave packet evaluated at time zero

\begin{aligned}\psi(x,t) =\sqrt{\frac{m}{2 \pi i \hbar t}}\frac{1}{{2 \pi}}\int dx' dx'' dke^{i m (x-x')^2/ (2 \hbar t)}e^{i k (x' - x'')} \psi(x'', 0)\end{aligned} \hspace{\stretch{1}}(2.11)

We see that the propagator also ends up with a Fourier transform structure, and we have

\begin{aligned}\psi(x,t) &= \int dx' U(x, x' ; t) \psi(x', 0) \\ U(x, x' ; t) &=\sqrt{\frac{m}{2 \pi i \hbar t}}\frac{1}{{2 \pi}}\int du dke^{i m (x - x' - u)^2/ (2 \hbar t)}e^{i k u }\end{aligned} \hspace{\stretch{1}}(2.12)

Does that Fourier transform exist? I’d not be surprised if it ended up with a delta function representation. I’ll hold off attempting to evaluate and reduce it until another day.

4. Hydrogen atom.

This problem deals with the hydrogen atom, with an initial ket

\begin{aligned}{\lvert {\psi(0)} \rangle} = \frac{1}{{\sqrt{3}}} {\lvert {100} \rangle}+\frac{1}{{\sqrt{3}}} {\lvert {210} \rangle}+\frac{1}{{\sqrt{3}}} {\lvert {211} \rangle},\end{aligned} \hspace{\stretch{1}}(2.14)

where

\begin{aligned}\left\langle{\mathbf{r}} \vert {{100}}\right\rangle = \Phi_{100}(\mathbf{r}),\end{aligned} \hspace{\stretch{1}}(2.15)

etc.

\paragraph{Q: (a)}

If no measurement is made until time t = t_0,

\begin{aligned}t_0 = \frac{\pi \hbar}{ \frac{3}{4} (13.6 \text{eV}) } = \frac{ 4 \pi \hbar }{ 3 E_I},\end{aligned} \hspace{\stretch{1}}(2.16)

what is the ket {\lvert {\psi(t)} \rangle} just before the measurement is made?

\paragraph{A:}

Our time evolved state is

\begin{aligned}{\lvert {\psi(t_0)} \rangle} = \frac{1}{{\sqrt{3}}} e^{-i E_1 t_0 /\hbar } {\lvert {100} \rangle}+\frac{1}{{\sqrt{3}}} e^{- i E_2 t_0/\hbar } ({\lvert {210} \rangle} + {\lvert {211} \rangle}).\end{aligned} \hspace{\stretch{1}}(2.17)

Also observe that this initial time was picked to make the exponential values come out nicely, and we have

\begin{aligned}\frac{E_n t_0 }{\hbar} &= - \frac{E_I \pi \hbar }{\frac{3}{4} E_I n^2 \hbar} \\ &= - \frac{4 \pi }{ 3 n^2 },\end{aligned}

so our time evolved state is just

\begin{aligned}{\lvert {\psi(t_0)} \rangle} = \frac{1}{{\sqrt{3}}} e^{i 4 \pi / 3} {\lvert {100} \rangle}+\frac{1}{{\sqrt{3}}} e^{ i \pi / 3 } ({\lvert {210} \rangle} + {\lvert {211} \rangle}).\end{aligned} \hspace{\stretch{1}}(2.18)

\paragraph{Q: (b)}

Suppose that at time t_0 an L_z measurement is made, and the outcome 0 is recorded. What is the appropriate ket \psi_{\text{after}}(t_0) right after the measurement?

\paragraph{A:}

A measurement with outcome 0, means that the L_z operator measurement found the state at that point to be the eigenstate for L_z eigenvalue 0. Recall that if {\lvert {\phi} \rangle} is an eigenstate of L_z we have

\begin{aligned}L_z {\lvert {\phi} \rangle} = m \hbar {\lvert {\phi} \rangle},\end{aligned} \hspace{\stretch{1}}(2.19)

so a measurement of L_z with outcome zero means that we have m=0. Our measurement of L_z at time t_0 therefore filters out all but the m=0 states and our new state is proportional to the projection over all m=0 states as follows

\begin{aligned}{\lvert {\psi_{\text{after}}(t_0)} \rangle}&\propto \left( \sum_{n l} {\lvert {n l 0} \rangle}{\langle {n l 0} \rvert} \right) {\lvert {\psi(t_0)} \rangle}  \\ &\propto \left( {\lvert {1 0 0} \rangle}{\langle {1 0 0} \rvert} +{\lvert {2 1 0} \rangle}{\langle {2 1 0} \rvert} \right) {\lvert {\psi(t_0)} \rangle}  \\ &= \frac{1}{{\sqrt{3}}} e^{i 4 \pi / 3} {\lvert {100} \rangle}+\frac{1}{{\sqrt{3}}} e^{ i \pi / 3 } {\lvert {210} \rangle} \end{aligned}

A final normalization yields

\begin{aligned}{\lvert {\psi_{\text{after}}(t_0)} \rangle}= \frac{1}{{\sqrt{2}}} ({\lvert {210} \rangle} - {\lvert {100} \rangle})\end{aligned} \hspace{\stretch{1}}(2.20)

\paragraph{Q: (c)}

Right after this L_z measurement, what is {\left\lvert{\psi_{\text{after}}(t_0)}\right\rvert}^2?

\paragraph{A:}

Our amplitude is

\begin{aligned}\left\langle{\mathbf{r}} \vert {{\psi_{\text{after}}(t_0)}}\right\rangle&= \frac{1}{{\sqrt{2}}} (\left\langle{\mathbf{r}} \vert {{210}}\right\rangle - \left\langle{\mathbf{r}} \vert {{100}}\right\rangle) \\ &= \frac{1}{{\sqrt{2 \pi a_0^3}}}\left(\frac{r}{4\sqrt{2} a_0} e^{-r/2a_0} \cos\theta-e^{-r/a_0}\right) \\ &= \frac{1}{{\sqrt{2 \pi a_0^3}}}e^{-r/2 a_0} \left(\frac{r}{4\sqrt{2} a_0} \cos\theta-e^{-r/2 a_0}\right),\end{aligned}

so the probability density is

\begin{aligned}{\left\lvert{\left\langle{\mathbf{r}} \vert {{\psi_{\text{after}}(t_0)}}\right\rangle}\right\rvert}^2= \frac{1}{{2 \pi a_0^3}}e^{-r/a_0} \left(\frac{r}{4\sqrt{2} a_0} \cos\theta-e^{-r/2 a_0}\right)^2 \end{aligned} \hspace{\stretch{1}}(2.21)

\paragraph{Q: (d)}

If then a position measurement is made immediately, which if any components of the expectation value of \mathbf{R} will be nonvanishing? Justify your answer.

\paragraph{A:}

The expectation value of this vector valued operator with respect to a radial state {\lvert {\psi} \rangle} = \sum_{nlm} a_{nlm} {\lvert {nlm} \rangle} can be expressed as

\begin{aligned}\left\langle{\mathbf{R}}\right\rangle = \sum_{i=1}^3 \mathbf{e}_i \sum_{nlm, n'l'm'} a_{nlm}^{*} a_{n'l'm'} {\langle {nlm} \rvert} X_i{\lvert {n'l'm'} \rangle},\end{aligned} \hspace{\stretch{1}}(2.22)

where X_1 = X = R \sin\Theta \cos\Phi, X_2 = Y = R \sin\Theta \sin\Phi, X_3 = Z = R \cos\Theta.

Consider one of the matrix elements, and expand this by introducing an identity twice

\begin{aligned}{\langle {nlm} \rvert} X_i {\lvert {n'l'm'} \rangle}&=\int r^2 \sin\theta \,dr d\theta d\phi \,{r'}^2 \sin\theta' \,dr' d\theta' d\phi'\,\left\langle{{nlm}} \vert {{r \theta \phi}}\right\rangle {\langle {r \theta \phi} \rvert} X_i {\lvert {r' \theta' \phi' } \rangle}\left\langle{{r' \theta' \phi'}} \vert {{n'l'm'}}\right\rangle \\ &=\int r^2 \sin\theta \,dr d\theta d\phi \,{r'}^2 \sin\theta' \,dr' d\theta' d\phi'\,R_{nl}(r) Y_{lm}^{*}(\theta,\phi)\,\delta^3(\mathbf{x} - \mathbf{x}') \,x_i\,R_{n'l'}(r') Y_{l'm'}(\theta',\phi')\\ &=\int r^2 \sin\theta \,dr d\theta d\phi \,{r'}^2 \sin\theta' \,dr' d\theta' d\phi'\,R_{nl}(r) Y_{lm}^{*}(\theta,\phi) \,\frac{\delta(r-r') \delta(\theta - \theta') \delta(\phi-\phi')}{{r'}^2 \sin\theta'}\,x_i\,R_{n'l'}(r') Y_{l'm'}(\theta',\phi')\\ &=\int r^2 \sin\theta \,dr d\theta d\phi\, dr' d\theta' d\phi'\,R_{nl}(r) Y_{lm}^{*}(\theta,\phi)\, \delta(r-r') \delta(\theta - \theta') \delta(\phi-\phi')\,x_i\,R_{n'l'}(r') Y_{l'm'}(\theta',\phi')\\ &=\int r^2 \sin\theta \,dr d\theta d\phi\,R_{nl}(r) R_{n'l'}(r) Y_{lm}^{*}(\theta,\phi) Y_{l'm'}(\theta,\phi)\,x_i\\ \end{aligned}

Because our state has only m=0 contributions, the only \phi dependence in the X and Y components of \mathbf{R} comes from those components themselves. For X, we therefore integrate \int_0^{2\pi} \cos\phi d\phi = 0, and for Y we integrate \int_0^{2\pi} \sin\phi d\phi = 0, so these terms vanish. Our expectation value for \mathbf{R} for this state therefore lies completely on the z axis.
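This is also pleasant to verify directly with sympy. A sketch (a_0 = 1 assumed, and with the explicit hydrogen wave functions keyed in by hand, so those normalizations are my own insertion rather than part of the exam solution):

import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)

# normalized hydrogen wave functions for a_0 = 1
psi_100 = sp.exp(-r) / sp.sqrt(sp.pi)
psi_210 = r * sp.exp(-r / 2) * sp.cos(th) / sp.sqrt(32 * sp.pi)

psi = (psi_210 - psi_100) / sp.sqrt(2)   # the state 2.20, up to a global phase

def expect(f):
    return sp.integrate(sp.integrate(sp.integrate(
        psi**2 * f * r**2 * sp.sin(th),
        (ph, 0, 2 * sp.pi)), (th, 0, sp.pi)), (r, 0, sp.oo))

X = r * sp.sin(th) * sp.cos(ph)
Y = r * sp.sin(th) * sp.sin(ph)
Z = r * sp.cos(th)

print(expect(X), expect(Y), expect(Z))   # 0, 0, -128*sqrt(2)/243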

Questions from the Dec 2008 PHY355H1F exam.

1b. Trace invariance for unitary transformation.

\paragraph{Q:} Show that the trace of an operator is invariant under unitary transforms, i.e. if A' = U^\dagger A U, where U is a unitary operator, prove \text{Tr}(A') = \text{Tr}(A).

\paragraph{A:}

The bulk of this question is really to show that the trace is invariant under cyclic permutation of operators (unless this property is assumed known). To show that, we start with the definition of the trace

\begin{aligned}\text{Tr}(AB) &= \sum_n {\langle {n} \rvert} A B {\lvert {n} \rangle} \\ &= \sum_{n m} {\langle {n} \rvert} A {\lvert {m} \rangle} {\langle {m} \rvert} B {\lvert {n} \rangle} \\ &= \sum_{n m} {\langle {m} \rvert} B {\lvert {n} \rangle} {\langle {n} \rvert} A {\lvert {m} \rangle} \\ &= \sum_{m} {\langle {m} \rvert} B A {\lvert {m} \rangle}.\end{aligned}

Thus we have

\begin{aligned}\text{Tr}(A B) = \text{Tr}( B A ).\end{aligned} \hspace{\stretch{1}}(3.23)

For the unitarily transformed operator we have

\begin{aligned}\text{Tr}(A') &= \text{Tr}( U^\dagger A U ) \\ &= \text{Tr}( U^\dagger (A U) ) \\ &= \text{Tr}( (A U) U^\dagger ) \\ &= \text{Tr}( A (U U^\dagger) ) \\ &= \text{Tr}( A ) \qquad \square\end{aligned}
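A quick numeric check of this, generating a random unitary via a QR factorization (a standard trick, and my addition here rather than anything from the exam), is nearly a one-liner:

import numpy as np

rng = np.random.default_rng(1)
A = rng.random((5, 5)) + 1j * rng.random((5, 5))
Q, _ = np.linalg.qr(rng.random((5, 5)) + 1j * rng.random((5, 5)))   # Q is unitary

print(np.trace(Q.conj().T @ A @ Q), np.trace(A))   # equal up to round off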

1d. Determinant of an exponential operator in terms of trace.

\paragraph{Q:} If A is an Hermitian operator, show that

\begin{aligned}\text{Det}( \exp A ) = \exp ( \text{Tr}(A) )\end{aligned} \hspace{\stretch{1}}(3.24)

where the Determinant (\text{Det}) of an operator is the product of all its eigenvalues.

\paragraph{A:}

The eigenvalue clue in the question provides the starting point. We write the exponential in its series form

\begin{aligned}e^A = 1 + \sum_{k=1}^\infty \frac{1}{{k!}} A^k\end{aligned} \hspace{\stretch{1}}(3.25)

Now, suppose that we have the following eigenvalue relationships for A

\begin{aligned}A {\lvert {n} \rangle} = \lambda_n {\lvert {n} \rangle}.\end{aligned} \hspace{\stretch{1}}(3.26)

From this the exponential is

\begin{aligned}e^A {\lvert {n} \rangle} &= {\lvert {n} \rangle} + \sum_{k=1}^\infty \frac{1}{{k!}} A^k {\lvert {n} \rangle} \\ &= {\lvert {n} \rangle} + \sum_{k=1}^\infty \frac{1}{{k!}} (\lambda_n)^k {\lvert {n} \rangle} \\ &= e^{\lambda_n} {\lvert {n} \rangle}.\end{aligned}

We see that the eigenstates of e^A are those of A, with eigenvalues e^{\lambda_n}.

By the definition of the determinant given we have

\begin{aligned}\text{Det}( e^A ) &= \Pi_n e^{\lambda_n} \\ &= e^{\sum_n \lambda_n} \\ &= e^{\text{Tr}(A)} \qquad \square\end{aligned}
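This one is also easy to check numerically for a random Hermitian matrix, using scipy's matrix exponential:

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
M = rng.random((4, 4)) + 1j * rng.random((4, 4))
A = (M + M.conj().T) / 2   # Hermitian

print(np.linalg.det(expm(A)), np.exp(np.trace(A)))   # equal up to round off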

1e. Eigenvectors of the Harmonic oscillator creation operator.

\paragraph{Q:} Prove that the only eigenvector of the Harmonic oscillator creation operator is {\lvert {\text{null}} \rangle}.

\paragraph{A:}

Recall that the creation (raising) operator was given by

\begin{aligned}a^\dagger = \sqrt{\frac{m \omega}{2 \hbar}} X - \frac{ i }{\sqrt{2 m \omega \hbar} } P= \frac{1}{{ \alpha \sqrt{2} }} X - \frac{ i \alpha }{\sqrt{2} \hbar } P,\end{aligned} \hspace{\stretch{1}}(3.27)

where \alpha = \sqrt{\hbar/m \omega}. Now assume that a^\dagger {\lvert {\phi} \rangle} = \lambda {\lvert {\phi} \rangle} so that

\begin{aligned}{\langle {x} \rvert} a^\dagger {\lvert {\phi} \rangle} = {\langle {x} \rvert} \lambda {\lvert {\phi} \rangle}.\end{aligned} \hspace{\stretch{1}}(3.28)

Write \left\langle{{x}} \vert {{\phi}}\right\rangle = \phi(x), and expand the LHS using 3.27 for

\begin{aligned}\lambda \phi(x) &= {\langle {x} \rvert} a^\dagger {\lvert {\phi} \rangle}  \\ &= {\langle {x} \rvert} \left( \frac{1}{{ \alpha \sqrt{2} }} X - \frac{ i \alpha }{\sqrt{2} \hbar } P \right) {\lvert {\phi} \rangle} \\ &= \frac{x \phi(x)}{ \alpha \sqrt{2} } - \frac{ i \alpha }{\sqrt{2} \hbar } (-i\hbar)\frac{\partial {}}{\partial {x}} \phi(x) \\ &= \frac{x \phi(x)}{ \alpha \sqrt{2} } - \frac{ \alpha }{\sqrt{2} } \frac{\partial {\phi(x)}}{\partial {x}}.\end{aligned}

As usual write \xi = x/\alpha, and rearrange. This gives us

\begin{aligned}\frac{\partial {\phi}}{\partial {\xi}} +\sqrt{2} \lambda \phi - \xi \phi = 0.\end{aligned} \hspace{\stretch{1}}(3.29)

Observe that this can be viewed as a homogeneous LDE of the form

\begin{aligned}\frac{\partial {\phi}}{\partial {\xi}} - \xi \phi = 0,\end{aligned} \hspace{\stretch{1}}(3.30)

augmented by a forcing term \sqrt{2}\lambda \phi. The homogeneous equation has the solution \phi = A e^{\xi^2/2}, so for the complete equation we assume a solution

\begin{aligned}\phi(\xi) = A(\xi) e^{\xi^2/2}.\end{aligned} \hspace{\stretch{1}}(3.31)

Since \phi' = (A' + A \xi) e^{\xi^2/2}, we produce a LDE of

\begin{aligned}0 &= (A' + A \xi -\xi A + \sqrt{2} \lambda A ) e^{\xi^2/2} \\ &= (A' + \sqrt{2} \lambda A ) e^{\xi^2/2},\end{aligned}

or

\begin{aligned}0 = A' + \sqrt{2} \lambda A.\end{aligned} \hspace{\stretch{1}}(3.32)

This has solution A = B e^{-\sqrt{2} \lambda \xi}, so our solution for 3.29 is

\begin{aligned}\phi(\xi) = B e^{\xi^2/2 - \sqrt{2} \lambda \xi} = B' e^{ (\xi - \lambda \sqrt{2} )^2/2}.\end{aligned} \hspace{\stretch{1}}(3.33)

This wave function is an inverted Gaussian, with a growing positive exponent and a minimum at \xi = \lambda\sqrt{2}. It is unnormalizable, since we require B' = 0 for any \lambda if we are to have \int {\left\lvert{\phi}\right\rvert}^2 < \infty. Since \left\langle{{\xi}} \vert {{\phi}}\right\rangle = \phi(\xi) = 0, we must also have {\lvert {\phi} \rangle} = 0, completing the exercise.

2. Two level quantum system.

Consider a two-level quantum system, with basis states \{{\lvert {a} \rangle}, {\lvert {b} \rangle}\}. Suppose that the Hamiltonian for this system is given by

\begin{aligned}H = \frac{\hbar \Delta}{2} ( {\lvert {b} \rangle}{\langle {b} \rvert}- {\lvert {a} \rangle}{\langle {a} \rvert})+ i \frac{\hbar \Omega}{2} ( {\lvert {a} \rangle}{\langle {b} \rvert}- {\lvert {b} \rangle}{\langle {a} \rvert})\end{aligned} \hspace{\stretch{1}}(3.34)

where \Delta and \Omega are real positive constants.

\paragraph{Q: (a)} Find the energy eigenvalues and the normalized energy eigenvectors (expressed in terms of the \{{\lvert {a} \rangle}, {\lvert {b} \rangle}\} basis). Write the time evolution operator U(t) = e^{-i H t/\hbar} using these eigenvectors.

\paragraph{A:}

The eigenvalue part of this problem is probably easier to do in matrix form. Let

\begin{aligned}{\lvert {a} \rangle} &= \begin{bmatrix}1 \\ 0\end{bmatrix} \\ {\lvert {b} \rangle} &= \begin{bmatrix}0 \\ 1\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.35)

Our Hamiltonian is then

\begin{aligned}H = \frac{\hbar}{2} \begin{bmatrix}-\Delta & i \Omega \\ -i \Omega & \Delta\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.37)

Computing \det (H - \lambda I) = 0, we get

\begin{aligned}\lambda = \pm \frac{\hbar}{2} \sqrt{ \Delta^2 + \Omega^2 }.\end{aligned} \hspace{\stretch{1}}(3.38)

Let \delta = \sqrt{ \Delta^2 + \Omega^2 }. Our normalized eigenvectors are found to be

\begin{aligned}{\lvert {\pm} \rangle} = \frac{1}{{\sqrt{ 2 \delta (\delta \pm \Delta)} }}\begin{bmatrix}i \Omega \\ \Delta \pm \delta\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.39)

In terms of {\lvert {a} \rangle} and {\lvert {b} \rangle}, we then have

\begin{aligned}{\lvert {\pm} \rangle} = \frac{1}{{\sqrt{ 2 \delta (\delta \pm \Delta)} }}\left(i \Omega {\lvert {a} \rangle}+ (\Delta \pm \delta) {\lvert {b} \rangle} \right).\end{aligned} \hspace{\stretch{1}}(3.40)

Note that our Hamiltonian has a simple form in this basis. That is

\begin{aligned}H = \frac{\delta \hbar}{2} ({\lvert {+} \rangle}{\langle {+} \rvert} - {\lvert {-} \rangle}{\langle {-} \rvert} )\end{aligned} \hspace{\stretch{1}}(3.41)

Observe that once we do the diagonalization, we have a Hamiltonian that appears to have the form of a scaled projector for an open Stern-Gerlach apparatus.

Observe that the diagonalized Hamiltonian operator makes the time evolution operator’s form also simple, which is, by inspection

\begin{aligned}U(t) = e^{-i t \frac{\delta}{2}} {\lvert {+} \rangle}{\langle {+} \rvert} + e^{i t \frac{\delta}{2}} {\lvert {-} \rangle}{\langle {-} \rvert}.\end{aligned} \hspace{\stretch{1}}(3.42)

Since we are asked for this in terms of {\lvert {a} \rangle}, and {\lvert {b} \rangle}, the projectors {\lvert {\pm} \rangle}{\langle {\pm} \rvert} are required. These are

\begin{aligned}{\lvert {\pm} \rangle}{\langle {\pm} \rvert} &= \frac{1}{{2 \delta (\delta \pm \Delta)}}\Bigl( i \Omega {\lvert {a} \rangle} + (\Delta \pm \delta) {\lvert {b} \rangle} \Bigr)\Bigl( -i \Omega {\langle {a} \rvert} + (\Delta \pm \delta) {\langle {b} \rvert} \Bigr) \\ \end{aligned}

\begin{aligned}{\lvert {\pm} \rangle}{\langle {\pm} \rvert} = \frac{1}{{2 \delta (\delta \pm \Delta)}}\Bigl(\Omega^2 {\lvert {a} \rangle}{\langle {a} \rvert}+(\Delta \pm \delta)^2 {\lvert {b} \rangle}{\langle {b} \rvert}+i \Omega (\Delta \pm \delta) ({\lvert {a} \rangle}{\langle {b} \rvert}-{\lvert {b} \rangle}{\langle {a} \rvert})\Bigr)\end{aligned} \hspace{\stretch{1}}(3.43)

Substitution into 3.42 and a fair amount of algebra leads to

\begin{aligned}U(t) = \cos(\delta t/2) \Bigl( {\lvert {a} \rangle}{\langle {a} \rvert} + {\lvert {b} \rangle}{\langle {b} \rvert} \Bigr)+ i \frac{\Delta}{\delta} \sin(\delta t/2) \Bigl( {\lvert {a} \rangle}{\langle {a} \rvert} - {\lvert {b} \rangle}{\langle {b} \rvert} \Bigr)+ \frac{\Omega}{\delta} \sin(\delta t/2) \Bigl( {\lvert {a} \rangle}{\langle {b} \rvert} - {\lvert {b} \rangle}{\langle {a} \rvert} \Bigr).\end{aligned} \hspace{\stretch{1}}(3.44)

Note that, while a bit cumbersome, we can also verify that we recover the original Hamiltonian from 3.41 and 3.43.
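Since this algebra is error prone, a small numpy check of 3.44 against a direct matrix exponential seems prudent (arbitrary values of \Delta, \Omega, t with \hbar = 1 assumed):

import numpy as np
from scipy.linalg import expm

Delta, Omega, t = 0.7, 1.3, 2.1
delta = np.hypot(Delta, Omega)

H = 0.5 * np.array([[-Delta, 1j * Omega],
                    [-1j * Omega, Delta]])   # eq. 3.37 in the (|a>, |b>) basis

C, S = np.cos(delta * t / 2), np.sin(delta * t / 2)
U = np.array([[C + 1j * (Delta / delta) * S, (Omega / delta) * S],
              [-(Omega / delta) * S, C - 1j * (Delta / delta) * S]])   # eq. 3.44

print(np.max(np.abs(expm(-1j * H * t) - U)))   # ~1e-16

# also check the probability computed below in part (c)
phi = expm(-1j * H * t) @ np.array([0, 1])     # U(t)|b>
print(abs((phi[0] + phi[1]) / np.sqrt(2))**2,
      0.5 * (1 + (Omega / delta) * np.sin(delta * t)))   # equal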

\paragraph{Q: (b)}

Suppose that the initial state of the system at time t = 0 is {\lvert {\phi(0)} \rangle}= {\lvert {b} \rangle}. Find an expression for the state at some later time t > 0, {\lvert {\phi(t)} \rangle}.

\paragraph{A:}

Most of the work is already done. Computation of {\lvert {\phi(t)} \rangle} = U(t) {\lvert {\phi(0)} \rangle} follows from 3.44

\begin{aligned}{\lvert {\phi(t)} \rangle} =\left( \cos(\delta t/2) - i \frac{\Delta}{\delta} \sin(\delta t/2) \right) {\lvert {b} \rangle}+ \frac{\Omega}{\delta} \sin(\delta t/2) {\lvert {a} \rangle}.\end{aligned} \hspace{\stretch{1}}(3.45)

\paragraph{Q: (c)}

Suppose that an observable, specified by the operator X = {\lvert {a} \rangle}{\langle {b} \rvert} + {\lvert {b} \rangle}{\langle {a} \rvert}, is measured for this system. What is the probability that, at time t, the result 1 is obtained? Plot this probability as a function of time, showing the maximum and minimum values of the function, and the corresponding values of t.

\paragraph{A:}

The language of questions like these attempts to bring some physics into the mathematics. The phrase “the result 1 is obtained” is really a statement that, after the measurement, the system is found in the eigenstate of X associated with the eigenvalue 1.

We can calculate the eigenvalues for this operator easily enough, and find them to be \pm 1. For the positive eigenvalue we can also compute the eigenstate to be

\begin{aligned}{\lvert {X+} \rangle} = \frac{1}{{\sqrt{2}}} \Bigl( {\lvert {a} \rangle} + {\lvert {b} \rangle} \Bigr).\end{aligned} \hspace{\stretch{1}}(3.46)

The question of what the probability for this measurement is, is then really a question asking for the computation of the squared amplitude

\begin{aligned}{\left\lvert{\frac{1}{{\sqrt{2}}}\left\langle{{ (a + b)}} \vert {{\phi(t)}}\right\rangle}\right\rvert}^2\end{aligned} \hspace{\stretch{1}}(3.47)

From 3.45 we find this probability to be

\begin{aligned}{\left\lvert{\frac{1}{{\sqrt{2}}}\left\langle{{ (a + b)}} \vert {{\phi(t)}}\right\rangle}\right\rvert}^2&=\frac{1}{{2}} \left(\left(\cos(\delta t/2) + \frac{\Omega}{\delta} \sin(\delta t/2)\right)^2+ \frac{ \Delta^2 \sin^2(\delta t/2)}{\delta^2}\right) \\ &=\frac{1}{{2}} \left( 1 + \frac{\Omega}{\delta} \sin(\delta t) \right)\end{aligned}

This is a single sinusoid oscillating about the mean value 1/2, periodic with period 2 \pi/\delta, with maximum value (1 + \Omega/\delta)/2 at \delta t = \pi/2 (modulo 2\pi) and minimum value (1 - \Omega/\delta)/2 at \delta t = 3\pi/2 (modulo 2\pi). I’d attempted a rough sketch of this on paper, but won’t bother scanning it here or describing it further.

\paragraph{Q: (d)}

Suppose an experimenter has control over the values of the parameters \Delta and \Omega. Explain how she might prepare the state ({\lvert {a} \rangle} + {\lvert {b} \rangle})/\sqrt{2}.

\paragraph{A:}

For this part of the question I wasn’t sure what approach to take. I thought perhaps this linear combination of states could be made to equal one of the energy eigenstates, and if one could prepare the system in that state, then for certain values of \Omega and \Delta one would then have this desired state.

To get there I note that we can express the states {\lvert {a} \rangle}, and {\lvert {b} \rangle} in terms of the eigenstates by inverting

\begin{aligned}\begin{bmatrix}{\lvert {+} \rangle} \\ {\lvert {-} \rangle} \\ \end{bmatrix}=\frac{1}{{\sqrt{2\delta}}}\begin{bmatrix}\frac{i \Omega}{\sqrt{\delta + \Delta}} & \sqrt{\delta + \Delta} \\ \frac{i \Omega}{\sqrt{\delta - \Delta}} & -\sqrt{\delta - \Delta}\end{bmatrix}\begin{bmatrix}{\lvert {a} \rangle} \\ {\lvert {b} \rangle} \\ \end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.48)

Skipping all the algebra one finds

\begin{aligned}\begin{bmatrix}{\lvert {a} \rangle} \\ {\lvert {b} \rangle} \\ \end{bmatrix}=\frac{1}{{\sqrt{2\delta}}}\begin{bmatrix}-i\sqrt{\delta - \Delta} & -i\sqrt{\delta + \Delta} \\ \frac{\Omega}{\sqrt{\delta - \Delta}} &-\frac{\Omega}{\sqrt{\delta + \Delta}} \end{bmatrix}\begin{bmatrix}{\lvert {+} \rangle} \\ {\lvert {-} \rangle} \\ \end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.49)

Unfortunately, this doesn’t seem helpful. I find

\begin{aligned}\frac{1}{{\sqrt{2}}} ( {\lvert {a} \rangle} + {\lvert {b} \rangle} ) = \frac{1}{{2\sqrt{\delta}}}\left(\frac{{\lvert {+} \rangle}}{\sqrt{\delta - \Delta}}( \Omega - i (\delta - \Delta) )-\frac{{\lvert {-} \rangle}}{\sqrt{\delta + \Delta}}( \Omega + i (\delta + \Delta) )\right)\end{aligned} \hspace{\stretch{1}}(3.50)

There’s no obvious way to pick \Omega and \Delta to leave just {\lvert {+} \rangle} or {\lvert {-} \rangle}. When I did this on paper originally I got a different answer for this sum, but looking at it now, I can’t see how I managed to get that answer (it had no factors of i in the result as the one above does).

3. One dimensional harmonic oscillator.

Consider a one-dimensional harmonic oscillator with the Hamiltonian

\begin{aligned}H = \frac{1}{{2m}}P^2 + \frac{1}{{2}} m \omega^2 X^2\end{aligned} \hspace{\stretch{1}}(3.51)

Denote the ground state of the system by {\lvert {0} \rangle}, the first excited state by {\lvert {1} \rangle} and so on.

\paragraph{Q: (a)}
Evaluate {\langle {n} \rvert} X {\lvert {n} \rangle} and {\langle {n} \rvert} X^2 {\lvert {n} \rangle} for arbitrary {\lvert {n} \rangle}.

\paragraph{A:}

Writing X in terms of the raising and lowering operators we have

\begin{aligned}X = \frac{\alpha}{\sqrt{2}} (a^\dagger + a),\end{aligned} \hspace{\stretch{1}}(3.52)

so \left\langle{{X}}\right\rangle is proportional to

\begin{aligned}{\langle {n} \rvert} a^\dagger + a {\lvert {n} \rangle} = \sqrt{n+1} \left\langle{{n}} \vert {{n+1}}\right\rangle + \sqrt{n} \left\langle{{n}} \vert {{n-1}}\right\rangle = 0.\end{aligned} \hspace{\stretch{1}}(3.53)

For \left\langle{{X^2}}\right\rangle we have

\begin{aligned}\left\langle{{X^2}}\right\rangle&=\frac{\alpha^2}{2}{\langle {n} \rvert} (a^\dagger + a)(a^\dagger + a) {\lvert {n} \rangle} \\ &=\frac{\alpha^2}{2}{\langle {n} \rvert} (a^\dagger + a) \left( \sqrt{n+1} {\lvert {n+1} \rangle} + \sqrt{n} {\lvert {n-1} \rangle}\right)  \\ &=\frac{\alpha^2}{2}{\langle {n} \rvert} \Bigl( (n+1) {\lvert {n} \rangle} + \sqrt{n(n-1)} {\lvert {n-2} \rangle}+ \sqrt{(n+1)(n+2)} {\lvert {n+2} \rangle} + n {\lvert {n} \rangle} \Bigr).\end{aligned}

We are left with just

\begin{aligned}\left\langle{{X^2}}\right\rangle = \frac{\hbar}{2 m \omega} (2n + 1).\end{aligned} \hspace{\stretch{1}}(3.54)
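This result is easy to check numerically with truncated matrix representations of the ladder operators (a minimal sketch, taking \alpha = 1):

import numpy as np

# Truncated harmonic oscillator basis; a|n> = sqrt(n)|n-1>.
N = 20
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
X = (a.T + a) / np.sqrt(2)  # X = alpha (a^dagger + a)/sqrt(2), with alpha = 1

for n in range(5):
    e = np.zeros(N)
    e[n] = 1
    # <n|X|n> = 0 and <n|X^2|n> = (2n + 1)/2 for alpha = 1
    print(n, e @ X @ e, e @ X @ X @ e, (2 * n + 1) / 2)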

\paragraph{Q: (b)}

Suppose that at t=0 the system is prepared in the state

\begin{aligned}{\lvert {\psi(0)} \rangle} = \frac{1}{{\sqrt{2}}} ( {\lvert {0} \rangle} + i {\lvert {1} \rangle} ).\end{aligned} \hspace{\stretch{1}}(3.55)

If a measurement of position X were performed immediately, sketch the probability distribution P(x) that a particle would be found within dx of x. Justify how you construct the sketch.

\paragraph{A:}

The probability that we started in state {\lvert {\psi(0)} \rangle} and ended up in position x is governed by the amplitude \left\langle{{x}} \vert {{\psi(0)}}\right\rangle, and the probability of being within an interval \Delta x, surrounding the point x is given by

\begin{aligned}\int_{x'=x-\Delta x/2}^{x+\Delta x/2} {\left\lvert{ \left\langle{{x'}} \vert {{\psi(0)}}\right\rangle }\right\rvert}^2 dx'.\end{aligned} \hspace{\stretch{1}}(3.56)

In the limit as \Delta x \rightarrow 0, this is just the squared amplitude itself evaluated at the point x, so we are interested in the quantity

\begin{aligned}{\left\lvert{ \left\langle{{x}} \vert {{\psi(0)}}\right\rangle }\right\rvert}^2  = \frac{1}{{2}} {\left\lvert{ \left\langle{{x}} \vert {{0}}\right\rangle + i \left\langle{{x}} \vert {{1}}\right\rangle }\right\rvert}^2.\end{aligned} \hspace{\stretch{1}}(3.57)

We are given these wave functions in the supplemental formulas. Namely,

\begin{aligned}\left\langle{{x}} \vert {{0}}\right\rangle &= \psi_0(x) = \frac{e^{-x^2/2\alpha^2}}{ \sqrt{\alpha \sqrt{\pi}}} \\ \left\langle{{x}} \vert {{1}}\right\rangle &= \psi_1(x) = \frac{e^{-x^2/2\alpha^2} 2 x }{ \alpha \sqrt{2 \alpha \sqrt{\pi}}}.\end{aligned} \hspace{\stretch{1}}(3.58)

Substituting these into 3.57 we have

\begin{aligned}{\left\lvert{ \left\langle{{x}} \vert {{\psi(0)}}\right\rangle }\right\rvert}^2 = \frac{1}{{2}} e^{-x^2/\alpha^2}\frac{1}{{ \alpha \sqrt{\pi}}}{\left\lvert{ 1 + \frac{2 i x}{\alpha \sqrt{2} } }\right\rvert}^2=\frac{e^{-x^2/\alpha^2}}{ 2\alpha \sqrt{\pi}}\left( 1 + \frac{2 x^2}{\alpha^2 } \right).\end{aligned} \hspace{\stretch{1}}(3.60)

The resulting distribution is double humped and symmetric about the origin, with a local minimum at x = 0 and maxima at x = \pm \alpha/\sqrt{2} (the shape is easily plotted, say with Wolfram Alpha: http://www.wolframalpha.com/input/?i=graph+e^(-x^2)+(1+ ).
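To see where those extrema sit, differentiate 3.60:

\begin{aligned}\frac{d}{dx} \left( e^{-x^2/\alpha^2} \left( 1 + \frac{2 x^2}{\alpha^2} \right) \right)=\frac{2x}{\alpha^2} e^{-x^2/\alpha^2}\left( 1 - \frac{2 x^2}{\alpha^2} \right),\end{aligned}

which vanishes at x = 0 (the local minimum) and at x = \pm \alpha/\sqrt{2} (the two maxima).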

\paragraph{Q: (c)}

Now suppose the state given in (b) above were allowed to evolve for a time t, determine the expectation value of X and \Delta X at that time.

\paragraph{A:}

Our time evolved state is

\begin{aligned}U(t) {\lvert {\psi(0)} \rangle} = \frac{1}{{\sqrt{2}}}\left(e^{-i \hbar \omega \left( 0 + \frac{1}{{2}} \right) t/\hbar } {\lvert {0} \rangle}+ i e^{-i \hbar \omega \left( 1 + \frac{1}{{2}} \right) t/\hbar } {\lvert {1} \rangle}\right)=\frac{1}{{\sqrt{2}}}\left(e^{-i \omega t/2 } {\lvert {0} \rangle}+ i e^{- 3 i \omega t/2 } {\lvert {1} \rangle}\right).\end{aligned} \hspace{\stretch{1}}(3.61)

The position expectation is therefore

\begin{aligned}{\langle {\psi(t)} \rvert} X {\lvert {\psi(t)} \rangle}&= \frac{\alpha}{2 \sqrt{2}}\left(e^{i \omega t/2 } {\langle {0} \rvert}- i e^{ 3 i \omega t/2 } {\langle {1} \rvert}\right)(a^\dagger + a)\left(e^{-i \omega t/2 } {\lvert {0} \rangle}+ i e^{- 3 i \omega t/2 } {\lvert {1} \rangle}\right) \\ \end{aligned}

We have already demonstrated that {\langle {n} \rvert} X {\lvert {n} \rangle} = 0, so we must only expand the cross terms, but those are just {\langle {0} \rvert} a^\dagger + a {\lvert {1} \rangle} = 1. This leaves

\begin{aligned}{\langle {\psi(t)} \rvert} X {\lvert {\psi(t)} \rangle}= \frac{\alpha}{2 \sqrt{2}}\left( -i e^{i \omega t} + i e^{-i \omega t} \right)=\sqrt{\frac{\hbar}{2 m \omega}} \sin(\omega t)\end{aligned} \hspace{\stretch{1}}(3.62)

For the squared position expectation

\begin{aligned}{\langle {\psi(t)} \rvert} X^2 {\lvert {\psi(t)} \rangle}&= \frac{\alpha^2}{4}\left(e^{i \omega t/2 } {\langle {0} \rvert}- i e^{ 3 i \omega t/2 } {\langle {1} \rvert}\right)(a^\dagger + a)^2\left(e^{-i \omega t/2 } {\lvert {0} \rangle}+ i e^{- 3 i \omega t/2 } {\lvert {1} \rangle}\right) \\ &=\frac{1}{{2}} ( {\langle {0} \rvert} X^2 {\lvert {0} \rangle} + {\langle {1} \rvert} X^2 {\lvert {1} \rangle} )+ i \frac{\alpha^2 }{4} ( - e^{ i \omega t} {\langle {1} \rvert} (a^\dagger + a)^2 {\lvert {0} \rangle}+ e^{ -i \omega t} {\langle {0} \rvert} (a^\dagger + a)^2 {\lvert {1} \rangle})\end{aligned}

Noting that (a^\dagger + a) {\lvert {0} \rangle} = {\lvert {1} \rangle}, and (a^\dagger + a)^2 {\lvert {0} \rangle} = (a^\dagger + a){\lvert {1} \rangle} = \sqrt{2} {\lvert {2} \rangle} + {\lvert {0} \rangle}, so we see the last two terms are zero. The first two we can evaluate using our previous result 3.54 which was \left\langle{{X^2}}\right\rangle = \frac{\alpha^2}{2} (2n + 1). This leaves

\begin{aligned}{\langle {\psi(t)} \rvert} X^2 {\lvert {\psi(t)} \rangle} = \alpha^2 \end{aligned} \hspace{\stretch{1}}(3.63)

Since \left\langle{{X}}\right\rangle^2 = \alpha^2 \sin^2(\omega t)/2, we have

\begin{aligned}(\Delta X)^2 = \left\langle{{X^2}}\right\rangle - \left\langle{{X}}\right\rangle^2 = \alpha^2 \left(1 - \frac{1}{{2}} \sin^2(\omega t) \right)\end{aligned} \hspace{\stretch{1}}(3.64)
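Both of these time dependent expectations can be checked numerically with the same truncated ladder matrices (a sketch, with \hbar = m = \omega = 1 so that \alpha = 1):

import numpy as np

# Evolve |psi(0)> = (|0> + i|1>)/sqrt(2) in a truncated oscillator basis.
N = 10
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
X = (a.T + a) / np.sqrt(2)
E = np.arange(N) + 0.5  # E_n = n + 1/2 in units of hbar omega

for t in (0.3, 1.0, 2.5):
    psi = np.zeros(N, dtype=complex)
    psi[0], psi[1] = 1 / np.sqrt(2), 1j / np.sqrt(2)
    psi = psi * np.exp(-1j * E * t)  # phases e^{-i E_n t}
    x = (psi.conj() @ X @ psi).real
    dx2 = (psi.conj() @ X @ X @ psi).real - x ** 2
    # columns agree pairwise: <X> vs sin(t)/sqrt(2), (Delta X)^2 vs 1 - sin^2(t)/2
    print(x, np.sin(t) / np.sqrt(2), dx2, 1 - np.sin(t) ** 2 / 2)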

\paragraph{Q: (d)}

Now suppose that initially the system were prepared in the ground state {\lvert {0} \rangle}, and then the resonance frequency is changed abruptly from \omega to \omega' so that the Hamiltonian becomes

\begin{aligned}H = \frac{1}{{2m}}P^2 + \frac{1}{{2}} m {\omega'}^2 X^2.\end{aligned} \hspace{\stretch{1}}(3.65)

Immediately, an energy measurement is performed; what is the probability of obtaining the result E = \hbar \omega' (3/2)?

\paragraph{A:}

This energy measurement E = \hbar \omega' (3/2) = \hbar \omega' (1 + 1/2), corresponds to an observation of state {\lvert {1'} \rangle}, after an initial observation of {\lvert {0} \rangle}. The probability of such a measurement is

\begin{aligned}{\left\lvert{ \left\langle{{1'}} \vert {{0}}\right\rangle }\right\rvert}^2\end{aligned} \hspace{\stretch{1}}(3.66)

Note that

\begin{aligned}\left\langle{{1'}} \vert {{0}}\right\rangle &=\int dx \left\langle{{1'}} \vert {{x}}\right\rangle\left\langle{{x}} \vert {{0}}\right\rangle \\ &=\int dx \psi_{1'}^{*}(x) \psi_0(x). \end{aligned}

The wave functions above are

\begin{aligned}\psi_{1'}(x) &= \frac{ 2 x e^{-x^2/2 {\alpha'}^2 }}{ \alpha' \sqrt{ 2 \alpha' \sqrt{\pi} } } \\ \psi_{0}(x) &= \frac{ e^{-x^2/2 {\alpha}^2 } } { \sqrt{ \alpha \sqrt{\pi} } } \end{aligned} \hspace{\stretch{1}}(3.67)

Putting the pieces together we have

\begin{aligned}\left\langle{{1'}} \vert {{0}}\right\rangle &=\frac{2 }{ \alpha' \sqrt{ 2 \alpha' \alpha \pi } }\int dxx e^{-\frac{x^2}{2}\left( \frac{1}{{{\alpha'}^2}} + \frac{1}{{\alpha^2}} \right) }\end{aligned} \hspace{\stretch{1}}(3.69)

Since the integrand is odd and the integration range is symmetric, this evaluates to zero, and we conclude that the probability of measuring the specified energy is zero when the system is initially prepared in the ground state associated with the original Hamiltonian. Intuitively this makes some sense, if one thinks of the Fourier coefficient problem: one cannot construct an even function from linear combinations of purely odd functions.


Notes and problems for Desai Chapter V.

Posted by peeterjoot on November 8, 2010

[Click here for a PDF of this post with nicer formatting]

Motivation.

Chapter V notes for [1].

Notes

Problems

Problem 1.

Statement.

Obtain S_x, S_y, S_z for spin 1 in the representation in which S_z and S^2 are diagonal.

Solution.

For spin 1, we have

\begin{aligned}S^2 = 1 (1+1) \hbar^2 \mathbf{1}\end{aligned} \hspace{\stretch{1}}(3.1)

and are interested in the states {\lvert {1,-1} \rangle}, {\lvert {1, 0} \rangle}, and {\lvert {1,1} \rangle}. If, like angular momentum, we assume that we have for m_s = -1,0,1

\begin{aligned}S_z {\lvert {1,m_s} \rangle} = m_s \hbar {\lvert {1, m_s} \rangle}\end{aligned} \hspace{\stretch{1}}(3.2)

and introduce column matrix representations for the kets as follows

\begin{aligned}{\lvert {1,1} \rangle} &=\begin{bmatrix}1 \\ 0 \\ 0\end{bmatrix} \\ {\lvert {1,0} \rangle} &=\begin{bmatrix}0 \\ 1 \\ 0\end{bmatrix} \\ {\lvert {1,-1} \rangle} &=\begin{bmatrix}0 \\ 0 \\ 1\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(3.3)

then we have, by inspection

\begin{aligned}S_z &= \hbar\begin{bmatrix}1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -1\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.6)

Note that, as with the Pauli matrices, and unlike the angular momentum case, the states {\lvert {s, m_s} \rangle} with s = -1 or s = 0 have not been considered. Do those have any physical interpretation?

That question aside, we can proceed as in the text, utilizing the ladder operators

\begin{aligned}S_{\pm} &= S_x \pm i S_y,\end{aligned} \hspace{\stretch{1}}(3.7)

to determine the values of S_x and S_y indirectly. We find

\begin{aligned}\left[{S_{+}},{S_{-}}\right] &= 2 \hbar S_z \\ \left[{S_{+}},{S_{z}}\right] &= -\hbar S_{+} \\ \left[{S_{-}},{S_{z}}\right] &= \hbar S_{-}.\end{aligned} \hspace{\stretch{1}}(3.8)

Let

\begin{aligned}S_{+} &=\begin{bmatrix}a & b & c \\ d & e & f \\ g & h & i\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.11)

Requiring that \left[{S_{z}},{S_{+}}\right]/\hbar = S_{+}, we find

\begin{aligned}\begin{bmatrix}0 & b & 2 c \\ -d & 0 & f \\ -2g & -h & 0\end{bmatrix}&=\begin{bmatrix}a & b & c \\ d & e & f \\ g & h & i\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(3.12)

so we must have

\begin{aligned}S_{+} &=\begin{bmatrix}0 & b & 0 \\ 0 & 0 & f \\ 0 & 0 & 0\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.13)

Furthermore, from \left[{S_{+}},{S_{-}}\right] = 2 \hbar S_z, we find

\begin{aligned}\begin{bmatrix}{\left\lvert{b}\right\rvert}^2 & 0 & 0 \\ 0 & {\left\lvert{f}\right\rvert}^2 - {\left\lvert{b}\right\rvert}^2 & 0 \\ 0 & 0 & -{\left\lvert{f}\right\rvert}^2\end{bmatrix} &= 2 \hbar^2\begin{bmatrix}1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -1\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.14)

We must have {\left\lvert{b}\right\rvert}^2 = {\left\lvert{f}\right\rvert}^2 = 2 \hbar^2. We could probably pick any
b = \sqrt{2} \hbar e^{i\phi}, and f = \sqrt{2} \hbar e^{i\theta}, but assuming we have no reason for a non-zero phase we try

\begin{aligned}S_{+}&=\sqrt{2} \hbar\begin{bmatrix}0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.15)

Putting all the pieces back together, with S_x = (S_{+} + S_{-})/2, and S_y = (S_{+} - S_{-})/2i, we finally have

\begin{aligned}S_x &=\frac{\hbar}{\sqrt{2}}\begin{bmatrix}0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0\end{bmatrix} \\ S_y &=\frac{\hbar}{\sqrt{2} i}\begin{bmatrix}0 & 1 & 0 \\ -1 & 0 & 1 \\ 0 & -1 & 0\end{bmatrix} \\ S_z &=\hbar\begin{bmatrix}1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -1\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.16)

A quick calculation verifies that we have S_x^2 + S_y^2 + S_z^2 = 2 \hbar^2 \mathbf{1}, as expected.
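That check, and the angular momentum commutation relations, can also be verified numerically (a small sketch with \hbar = 1):

import numpy as np

# Spin 1 matrices from 3.16, hbar = 1.
Sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2)
Sy = np.array([[0, 1, 0], [-1, 0, 1], [0, -1, 0]]) / (np.sqrt(2) * 1j)
Sz = np.diag([1, 0, -1])

print(np.allclose(Sx @ Sy - Sy @ Sx, 1j * Sz))                  # [S_x, S_y] = i S_z
print(np.allclose(Sx @ Sx + Sy @ Sy + Sz @ Sz, 2 * np.eye(3)))  # S^2 = 2 I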

Problem 2.

Statement.

Obtain the eigensolutions for the operator A = a \sigma_y + b \sigma_z. Call the eigenstates {\lvert {1} \rangle} and {\lvert {2} \rangle}, and determine the probabilities that they will correspond to \sigma_x = +1.

Solution.

The first part is straightforward, and we have

\begin{aligned}A &= a \begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix} + b \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix} \\ &=\begin{bmatrix}b & -i a \\ ia & -b\end{bmatrix}.\end{aligned}

Taking {\left\lvert{A - \lambda I}\right\rvert} = 0 we get

\begin{aligned}\lambda &= \pm \sqrt{a^2 + b^2},\end{aligned} \hspace{\stretch{1}}(3.19)

with eigenvectors proportional to

\begin{aligned}{\lvert {\pm} \rangle} &=\begin{bmatrix}i a \\ b \mp \sqrt{a^2 + b^2}\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.20)

The normalization constant is 1/\sqrt{2 (a^2 + b^2) \mp 2 b \sqrt{a^2 + b^2}}. Now we can call these {\lvert {1} \rangle} and {\lvert {2} \rangle}, but what does the last part of the question mean? What's meant by \sigma_x = +1?

Asking the prof about this, he says:

“I think it means that the result of a measurement of the x component of spin is +1. This corresponds to the eigenvalue of \sigma_x being +1. The spin operator S_x has eigenvalue +\hbar/2”.

Aside: Question to consider later. Is it significant that {\langle {1} \rvert} \sigma_x {\lvert {1} \rangle} = {\langle {2} \rvert} \sigma_x {\lvert {2} \rangle} = 0?

So, how do we translate this into a mathematical statement?

First let’s recall a couple of details. Recall that the x spin operator has the matrix representation

\begin{aligned}\sigma_x = \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.21)

This has eigenvalues \pm 1, with eigenstates (1,\pm 1)/\sqrt{2}. At the point when the x component spin is observed to be +1, the state of the system was then

\begin{aligned}{\lvert {x+} \rangle} =\frac{1}{{\sqrt{2}}}\begin{bmatrix}1 \\ 1\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.22)

Let’s look at the ways that this state can be formed as linear combinations of our states {\lvert {1} \rangle}, and {\lvert {2} \rangle}. That is

\begin{aligned}\frac{1}{{\sqrt{2}}}\begin{bmatrix}1 \\ 1\end{bmatrix}&=\alpha {\lvert {1} \rangle}+ \beta {\lvert {2} \rangle},\end{aligned} \hspace{\stretch{1}}(3.23)

or

\begin{aligned}\begin{bmatrix}1 \\ 1\end{bmatrix}&=\frac{\alpha}{\sqrt{(a^2 + b^2) - b \sqrt{a^2 + b^2}}}\begin{bmatrix}i a \\ b - \sqrt{a^2 + b^2}\end{bmatrix}+\frac{\beta}{\sqrt{(a^2 + b^2) + b \sqrt{a^2 + b^2}}}\begin{bmatrix}i a \\ b + \sqrt{a^2 + b^2}\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.24)

Letting c = \sqrt{a^2 + b^2}, this is

\begin{aligned}\begin{bmatrix}1 \\ 1\end{bmatrix}&=\frac{\alpha}{\sqrt{c^2 - b c}}\begin{bmatrix}i a \\ b - c\end{bmatrix}+\frac{\beta}{\sqrt{c^2 + b c}}\begin{bmatrix}i a \\ b + c\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.25)

We can solve the \alpha and \beta with Cramer’s rule, yielding

\begin{aligned}\begin{vmatrix}1 & i a \\ 1 & b - c\end{vmatrix}&=\frac{\beta}{\sqrt{c^2 + b c}}\begin{vmatrix}i a  & i a \\ b + c & b - c\end{vmatrix} \\ \begin{vmatrix}1 & i a \\ 1 & b + c\end{vmatrix}&=\frac{\alpha}{\sqrt{c^2 - b c}}\begin{vmatrix}i a  & i a \\ b - c & b + c\end{vmatrix},\end{aligned}

or

\begin{aligned}\alpha &= \frac{(b + c - ia)\sqrt{c^2 - b c}}{2 i a c} \\  \beta &= \frac{(b - c - ia)\sqrt{c^2 + b c}}{-2 i a c} \end{aligned} \hspace{\stretch{1}}(3.26)

It is {\left\lvert{\alpha}\right\rvert}^2 and {\left\lvert{\beta}\right\rvert}^2 that are probabilities, and after a bit of algebra we find that those are

\begin{aligned}{\left\lvert{\alpha}\right\rvert}^2 = {\left\lvert{\beta}\right\rvert}^2 = \frac{1}{{2}},\end{aligned} \hspace{\stretch{1}}(3.28)

so if the x spin of the system is measured as +1, we have a 50% probability for each of the two outcomes in a subsequent measurement of A (i.e. of finding the system in {\lvert {1} \rangle} or {\lvert {2} \rangle}).

Is that what the question was asking? I think that I’ve actually got it backwards. I think that the question was asking for the probability of finding state {\lvert {x+} \rangle} (measuring a spin 1 value for \sigma_x) given the state {\lvert {1} \rangle} or {\lvert {2} \rangle}.

So, suppose that we have

\begin{aligned}\mu_{+} {\lvert {x+} \rangle} + \nu_{+} {\lvert {x-} \rangle} &= {\lvert {1} \rangle} \\ \mu_{-} {\lvert {x+} \rangle} + \nu_{-} {\lvert {x-} \rangle} &= {\lvert {2} \rangle},\end{aligned} \hspace{\stretch{1}}(3.29)

or (considering both cases simultaneously),

\begin{aligned}\mu_{\pm}\begin{bmatrix}1 \\ 1\end{bmatrix}+ \nu_{\pm}\begin{bmatrix}1 \\ -1\end{bmatrix}&= \frac{1}{{\sqrt{ c^2 \mp b c }}} \begin{bmatrix}i a \\ b \mp c\end{bmatrix} \\ \implies \\ \mu_{\pm}\begin{vmatrix}1 & 1 \\ 1 & -1\end{vmatrix}&= \frac{1}{{\sqrt{ c^2 \mp b c }}} \begin{vmatrix}i a & 1 \\ b \mp c & -1\end{vmatrix},\end{aligned}

or

\begin{aligned}\mu_{\pm} &= \frac{ia + b \mp c}{2 \sqrt{c^2 \mp bc}} .\end{aligned} \hspace{\stretch{1}}(3.31)

Unsurprisingly, this mirrors the previous scenario, and we find that we have a probability {\left\lvert{\mu}\right\rvert}^2 = 1/2 of measuring a spin 1 value for \sigma_x when the observable A has been measured as \pm \sqrt{a^2 + b^2} (i.e. in the states {\lvert {1} \rangle} or {\lvert {2} \rangle} respectively).

No measurement of the operator A = a \sigma_y + b\sigma_z gives a biased prediction for a subsequent measurement of \sigma_x. Loosely, this seems to justify calling these operators orthogonal. This is consistent with the geometrical antisymmetric nature of the spin components where we have \sigma_y \sigma_x = -\sigma_x \sigma_y, just like two orthogonal vectors under the Clifford product.
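That 50/50 symmetry is easy to confirm numerically for arbitrary sample values of a and b (a small sketch):

import numpy as np

# A = a sigma_y + b sigma_z for sample a, b.
a, b = 1.7, -0.4
A = np.array([[b, -1j * a], [1j * a, -b]])
_, vecs = np.linalg.eigh(A)  # columns: normalized eigenvectors |1>, |2>
xplus = np.array([1, 1]) / np.sqrt(2)

for k in (0, 1):
    print(abs(xplus @ vecs[:, k]) ** 2)  # 0.5 for both eigenvectors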

Problem 3.

Statement.

Obtain the expectation values of S_x, S_y, S_z for the case of a spin 1/2 particle with the spin pointed in the direction of a vector with azimuthal angle \beta and polar angle \alpha.

Solution.

Let’s work with \sigma_k instead of S_k to eliminate the \hbar/2 factors. Before considering the expectation values in the arbitrary spin orientation, let’s consider just the expectation values for \sigma_k. Introducing a matrix representation (assumed normalized) for a reference state

\begin{aligned}{\lvert {\psi} \rangle} &= \begin{bmatrix}a \\ b\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(3.32)

we find

\begin{aligned}{\langle {\psi} \rvert} \sigma_x {\lvert {\psi} \rangle}&=\begin{bmatrix}a^{*} & b^{*}\end{bmatrix}\begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}\begin{bmatrix}a \\ b\end{bmatrix}= a^{*} b + b^{*} a\\ {\langle {\psi} \rvert} \sigma_y {\lvert {\psi} \rangle}&=\begin{bmatrix}a^{*} & b^{*}\end{bmatrix}\begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix}\begin{bmatrix}a \\ b\end{bmatrix}= - i a^{*} b + i b^{*} a \\ {\langle {\psi} \rvert} \sigma_z {\lvert {\psi} \rangle}&=\begin{bmatrix}a^{*} & b^{*}\end{bmatrix}\begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix}\begin{bmatrix}a \\ b\end{bmatrix}= a^{*} a - b^{*} b \end{aligned} \hspace{\stretch{1}}(3.33)

Each of these expectation values are real as expected due to the Hermitian nature of \sigma_k. We also find that

\begin{aligned}\sum_{k=1}^3 {{\langle {\psi} \rvert} \sigma_k {\lvert {\psi} \rangle}}^2 &= ({\left\lvert{a}\right\rvert}^2 + {\left\lvert{b}\right\rvert}^2)^2 = 1\end{aligned} \hspace{\stretch{1}}(3.36)
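This unit norm is easy to verify numerically for a random normalized state (a quick sketch):

import numpy as np

# Random normalized two component state.
rng = np.random.default_rng(0)
v = rng.normal(size=2) + 1j * rng.normal(size=2)
v = v / np.linalg.norm(v)

sigma = [np.array([[0, 1], [1, 0]]),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]])]

# Sum of squared Pauli expectation values: prints 1.0.
print(sum((v.conj() @ s @ v).real ** 2 for s in sigma))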

So a vector formed with the expectation values as components is a unit vector. This doesn't seem too unexpected given the section on the projection operators in the text, where it was stated that {\langle {\chi} \rvert} \boldsymbol{\sigma} {\lvert {\chi} \rangle} = \mathbf{p}, with \mathbf{p} a unit vector. Let's now consider the arbitrarily oriented spin vector \boldsymbol{\sigma} \cdot \mathbf{n}, and look at its expectation value.

With \mathbf{n} as the rotated image of \hat{\mathbf{z}} by an azimuthal angle \beta, and polar angle \alpha, we have

\begin{aligned}\mathbf{n} = (\sin\alpha \cos\beta,\sin\alpha \sin\beta,\cos\alpha)\end{aligned} \hspace{\stretch{1}}(3.37)

that is

\begin{aligned}\boldsymbol{\sigma} \cdot \mathbf{n} &= \sin\alpha \cos\beta \sigma_x + \sin\alpha \sin\beta \sigma_y + \cos\alpha \sigma_z \end{aligned} \hspace{\stretch{1}}(3.38)

The k = x,y,z projections of this operator

\begin{aligned}\frac{1}{{2}} \text{Tr} { \sigma_k (\boldsymbol{\sigma} \cdot \mathbf{n})} \sigma_k\end{aligned} \hspace{\stretch{1}}(3.39)

are just the Pauli matrices scaled by the components of \mathbf{n}

\begin{aligned}\frac{1}{{2}} \text{Tr} { \sigma_x (\boldsymbol{\sigma} \cdot \mathbf{n})} \sigma_x &= \sin\alpha \cos\beta \sigma_x  \\ \frac{1}{{2}} \text{Tr} { \sigma_y (\boldsymbol{\sigma} \cdot \mathbf{n})} \sigma_y &= \sin\alpha \sin\beta \sigma_y  \\ \frac{1}{{2}} \text{Tr} { \sigma_z (\boldsymbol{\sigma} \cdot \mathbf{n})} \sigma_z &= \cos\alpha \sigma_z,\end{aligned} \hspace{\stretch{1}}(3.40)

so our S_k expectation values are by inspection

\begin{aligned}{\langle {\psi} \rvert} S_x {\lvert {\psi} \rangle} &= \frac{\hbar}{2} \sin\alpha \cos\beta ( a^{*} b + b^{*} a ) \\ {\langle {\psi} \rvert} S_y {\lvert {\psi} \rangle} &= \frac{\hbar}{2} \sin\alpha \sin\beta ( - i a^{*} b + i b^{*} a ) \\ {\langle {\psi} \rvert} S_z {\lvert {\psi} \rangle} &= \frac{\hbar}{2} \cos\alpha ( a^{*} a - b^{*} b )\end{aligned} \hspace{\stretch{1}}(3.43)

Is this correct? While (\boldsymbol{\sigma} \cdot \mathbf{n})^2 = \mathbf{n}^2 = I is a unit norm operator, we find that the expectation values of the coordinates of \boldsymbol{\sigma} \cdot \mathbf{n} cannot be viewed as the coordinates of a unit vector. Let's consider a specific case, with \mathbf{n} = (0,0,1), where the spin operator is oriented along the z-axis. That gives us

\begin{aligned}\boldsymbol{\sigma} \cdot \mathbf{n} = \sigma_z\end{aligned} \hspace{\stretch{1}}(3.46)

so the expectation values of S_k are

\begin{aligned}\left\langle{{S_x}}\right\rangle &= 0 \\ \left\langle{{S_y}}\right\rangle &= 0 \\ \left\langle{{S_z}}\right\rangle &= \frac{\hbar}{2} ( a^{*} a - b^{*} b )\end{aligned} \hspace{\stretch{1}}(3.47)

Given this, it seems reasonable that from 3.43 we find

\begin{aligned}\sum_k {{\langle {\psi} \rvert} S_k {\lvert {\psi} \rangle}}^2 \ne \hbar^2/4,\end{aligned} \hspace{\stretch{1}}(3.50)

(since we don’t have any reason to believe that in general ( a^{*} a - b^{*} b )^2 = 1 is true).

The most general statement we can make about these expectation values (an average observed value for the measurement of the operator) is that

\begin{aligned}{\left\lvert{\left\langle{{S_k}}\right\rangle}\right\rvert} \le \frac{\hbar}{2} \end{aligned} \hspace{\stretch{1}}(3.51)

with equality for specific states and orientations only.

Problem 4.

Statement.

Take the azimuthal angle, \beta = 0, so that the spin is in the
x-z plane at an angle \alpha with respect to the z-axis, and the unit vector is \mathbf{n} = (\sin\alpha, 0, \cos\alpha). Write

\begin{aligned}{\lvert {\chi_{n+}} \rangle} = {\lvert {+\alpha} \rangle}\end{aligned} \hspace{\stretch{1}}(3.52)

for this case. Show that the probability that it is in the spin-up state in the direction \theta with respect to the z-axis is

\begin{aligned}{\left\lvert{ \left\langle{{+\theta}} \vert {{+\alpha}}\right\rangle }\right\rvert}^2 = \cos^2 \frac{\alpha - \theta}{2}\end{aligned} \hspace{\stretch{1}}(3.53)

Also obtain the expectation value of \boldsymbol{\sigma} \cdot \mathbf{n} with respect to the state {\lvert {+\theta} \rangle}.

Solution.

For this orientation we have

\begin{aligned}\boldsymbol{\sigma} \cdot \mathbf{n}&=\sin\alpha \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix} + \cos\alpha \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix}=\begin{bmatrix}\cos\alpha & \sin\alpha \\ \sin\alpha & -\cos\alpha\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.54)

Confirmation that our eigenvalues are \pm 1 is simple, and the eigenstate for the +1 eigenvalue is found to be

\begin{aligned}{\lvert {+\alpha} \rangle} \propto \begin{bmatrix}\sin\alpha \\ 1 - \cos\alpha\end{bmatrix}= \begin{bmatrix}2 \sin\alpha/2 \cos\alpha/2 \\ 2 \sin^2 \alpha/2\end{bmatrix}\propto\begin{bmatrix}\cos \alpha/2 \\ \sin\alpha/2 \end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.55)

This last has unit norm, so we can write

\begin{aligned}{\lvert {+\alpha} \rangle} =\begin{bmatrix}\cos \alpha/2 \\ \sin\alpha/2 \end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.56)

If the state has been measured to be

\begin{aligned}{\lvert {\phi} \rangle} = 1 {\lvert {+\alpha} \rangle} + 0 {\lvert {-\alpha} \rangle},\end{aligned} \hspace{\stretch{1}}(3.57)

then the probability of a second measurement obtaining {\lvert {+\theta} \rangle} is

\begin{aligned}{\left\lvert{ \left\langle{{+\theta}} \vert {{\phi}}\right\rangle }\right\rvert}^2&={\left\lvert{ \left\langle{{+\theta}} \vert {{+\alpha}}\right\rangle }\right\rvert}^2 .\end{aligned} \hspace{\stretch{1}}(3.58)

Expanding just the inner product first we have

\begin{aligned}\left\langle{{+\theta}} \vert {{+\alpha}}\right\rangle &=\begin{bmatrix}C_{\theta/2} & S_{\theta/2} \end{bmatrix}\begin{bmatrix}C_{\alpha/2} \\  S_{\alpha/2} \end{bmatrix} \\ &=S_{\theta/2} S_{\alpha/2} + C_{\theta/2} C_{\alpha/2}  \\ &= \cos\left( \frac{\theta - \alpha}{2} \right)\end{aligned}

So our probability of measuring spin up state {\lvert {+\theta} \rangle} given the state was known to have been in spin up state {\lvert {+\alpha} \rangle} is

\begin{aligned}{\left\lvert{ \left\langle{{+\theta}} \vert {{+\alpha}}\right\rangle }\right\rvert}^2 = \cos^2\left( \frac{\theta - \alpha}{2} \right)\end{aligned} \hspace{\stretch{1}}(3.59)

Finally, the expectation value for \boldsymbol{\sigma} \cdot \mathbf{n} with respect to {\lvert {+\theta} \rangle} is

\begin{aligned}\begin{bmatrix}C_{\theta/2} & S_{\theta/2} \end{bmatrix}\begin{bmatrix}C_\alpha & S_\alpha \\ S_\alpha & -C_\alpha\end{bmatrix}\begin{bmatrix}C_{\theta/2} \\ S_{\theta/2} \end{bmatrix} &=\begin{bmatrix}C_{\theta/2} & S_{\theta/2} \end{bmatrix}\begin{bmatrix}C_\alpha C_{\theta/2} + S_\alpha S_{\theta/2} \\ S_\alpha C_{\theta/2} - C_\alpha S_{\theta/2} \end{bmatrix} \\ &=C_{\theta/2} C_\alpha C_{\theta/2} + C_{\theta/2} S_\alpha S_{\theta/2} + S_{\theta/2} S_\alpha C_{\theta/2} - S_{\theta/2} C_\alpha S_{\theta/2} \\ &=C_\alpha ( C_{\theta/2}^2 -S_{\theta/2}^2 )+ 2 S_\alpha S_{\theta/2} C_{\theta/2} \\ &= C_\alpha C_\theta+ S_\alpha S_\theta \\ &= \cos( \alpha - \theta )\end{aligned}

Sanity checking this we observe that we have +1 as desired for the \alpha = \theta case.
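Both the overlap probability and this expectation value check out numerically (a sketch with arbitrary sample angles):

import numpy as np

# |+t> = (cos(t/2), sin(t/2)) and sigma.n for n in the x-z plane at angle alpha.
alpha, theta = 0.9, 2.1
plus = lambda t: np.array([np.cos(t / 2), np.sin(t / 2)])
sn = np.array([[np.cos(alpha), np.sin(alpha)],
               [np.sin(alpha), -np.cos(alpha)]])

print(abs(plus(theta) @ plus(alpha)) ** 2, np.cos((theta - alpha) / 2) ** 2)  # equal
print(plus(theta) @ sn @ plus(theta), np.cos(alpha - theta))                  # equal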

Problem 5.

Statement.

Consider an arbitrary density matrix, \rho, for a spin 1/2 system. Express each matrix element in terms of the ensemble averages [S_i] where i = x,y,z.

Solution.

Let’s omit the spin direction temporarily and write for the density matrix

\begin{aligned}\rho &= w_{+} {\lvert {+} \rangle}{\langle {+} \rvert}+w_{-} {\lvert {-} \rangle}{\langle {-} \rvert} \\ &=w_{+} {\lvert {+} \rangle}{\langle {+} \rvert}+(1 - w_{+}){\lvert {-} \rangle}{\langle {-} \rvert} \\ &={\lvert {-} \rangle}{\langle {-} \rvert} +w_{+} ({\lvert {+} \rangle}{\langle {+} \rvert} -{\lvert {-} \rangle}{\langle {-} \rvert})\end{aligned}

For the ensemble average (no sum over repeated indexes) we have

\begin{aligned}[S] = \left\langle{{S}}\right\rangle_{av} &= w_{+} {\langle {+} \rvert} S {\lvert {+} \rangle} +w_{-} {\langle {-} \rvert} S {\lvert {-} \rangle} \\ &= \frac{\hbar}{2}( w_{+} -w_{-} ) \\ &= \frac{\hbar}{2}( w_{+} -(1 - w_{+}) ) \\ &= \hbar w_{+} - \frac{\hbar}{2}\end{aligned}

This gives us

\begin{aligned}w_{+} = \frac{1}{{\hbar}} [S] + \frac{1}{{2}}\end{aligned}

and our density matrix becomes

\begin{aligned}\rho &=\frac{1}{{2}} ( {\lvert {+} \rangle}{\langle {+} \rvert} +{\lvert {-} \rangle}{\langle {-} \rvert} )+\frac{1}{{\hbar}} [S] ({\lvert {+} \rangle}{\langle {+} \rvert} -{\lvert {-} \rangle}{\langle {-} \rvert}) \\ &=\frac{1}{{2}} I+\frac{1}{{\hbar}} [S] ({\lvert {+} \rangle}{\langle {+} \rvert} -{\lvert {-} \rangle}{\langle {-} \rvert}) \end{aligned}

Utilizing

\begin{aligned}{\lvert {x+} \rangle} &= \frac{1}{{\sqrt{2}}}\begin{bmatrix}1 \\ 1\end{bmatrix} \\ {\lvert {x-} \rangle} &= \frac{1}{{\sqrt{2}}}\begin{bmatrix}1 \\ -1\end{bmatrix} \\ {\lvert {y+} \rangle} &= \frac{1}{{\sqrt{2}}}\begin{bmatrix}1 \\ i\end{bmatrix} \\ {\lvert {y-} \rangle} &= \frac{1}{{\sqrt{2}}}\begin{bmatrix}1 \\ -i\end{bmatrix} \\ {\lvert {z+} \rangle} &= \begin{bmatrix}1 \\ 0\end{bmatrix} \\ {\lvert {z-} \rangle} &= \begin{bmatrix}0 \\ 1\end{bmatrix}\end{aligned}

We can easily find

\begin{aligned}{\lvert {x+} \rangle}{\langle {x+} \rvert} -{\lvert {x-} \rangle}{\langle {x-} \rvert} &= \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix} = \sigma_x \\ {\lvert {y+} \rangle}{\langle {y+} \rvert} -{\lvert {y-} \rangle}{\langle {y-} \rvert} &= \begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix} = \sigma_y \\ {\lvert {z+} \rangle}{\langle {z+} \rvert} -{\lvert {z-} \rangle}{\langle {z-} \rvert} &= \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix} = \sigma_z\end{aligned}
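These outer product identities are easy to spot check numerically (a small sketch):

import numpy as np

# |i+><i+| - |i-><i-| = sigma_i for each of the x, y, z eigenvector pairs.
pairs = {'x': (np.array([1, 1]) / np.sqrt(2), np.array([1, -1]) / np.sqrt(2)),
         'y': (np.array([1, 1j]) / np.sqrt(2), np.array([1, -1j]) / np.sqrt(2)),
         'z': (np.array([1, 0]), np.array([0, 1]))}
sigma = {'x': np.array([[0, 1], [1, 0]]),
         'y': np.array([[0, -1j], [1j, 0]]),
         'z': np.array([[1, 0], [0, -1]])}

for k, (p, m) in pairs.items():
    diff = np.outer(p, p.conj()) - np.outer(m, m.conj())
    print(k, np.allclose(diff, sigma[k]))  # True for all three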

So we can write the density matrix in terms of any of the ensemble averages as

\begin{aligned}\rho =\frac{1}{{2}} I+\frac{1}{{\hbar}} [S_i] \sigma_i=\frac{1}{{2}} (I + [\sigma_i] \sigma_i )\end{aligned}

Alternatively, defining \mathbf{P}_i = [\sigma_i] \mathbf{e}_i, for any of the directions i = 1,2,3 we can write

\begin{aligned}\rho = \frac{1}{{2}} (I + \boldsymbol{\sigma} \cdot \mathbf{P}_i )\end{aligned} \hspace{\stretch{1}}(3.60)

In equation (5.109) we had a similar result in terms of the polarization vector \mathbf{P} = {\langle {\alpha} \rvert} \boldsymbol{\sigma} {\lvert {\alpha} \rangle}, and the individual weights w_\alpha, and w_\beta, but we see here that this (w_\alpha - w_\beta)\mathbf{P} factor can be written exclusively in terms of the ensemble average. Actually, this is also a result in the text, down in (5.113), but we see it here in a more concrete form having picked specific spin directions.

Problem 6.

Statement.

If a Hamiltonian is given by \boldsymbol{\sigma} \cdot \mathbf{n} where \mathbf{n} = (\sin\alpha\cos\beta, \sin\alpha\sin\beta, \cos\alpha), determine the time evolution operator as a 2 x 2 matrix. If a state at t = 0 is given by

\begin{aligned}{\lvert {\phi(0)} \rangle} = \begin{bmatrix}a \\ b\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(3.61)

then obtain {\lvert {\phi(t)} \rangle}.

Solution.

Before diving into the meat of the problem, observe that a tidy factorization of the Hamiltonian is possible as a composition of rotations. That is

\begin{aligned}H &= \boldsymbol{\sigma} \cdot \mathbf{n} \\ &= \sin\alpha \sigma_1 ( \cos\beta + \sigma_1 \sigma_2 \sin\beta ) + \cos\alpha \sigma_3 \\ &= \sigma_3 \left(\cos\alpha + \sin\alpha \sigma_3 \sigma_1 e^{ i \sigma_3 \beta }\right) \\ &= \sigma_3 \exp\left( \alpha i \sigma_2 \exp\left( \beta i \sigma_3 \right)\right)\end{aligned}

So we have for the time evolution operator

\begin{aligned}U(\Delta t) &=\exp( -i \Delta t H /\hbar )= \exp \left(- \frac{\Delta t}{\hbar} i \sigma_3 \exp\Bigl( \alpha i \sigma_2 \exp\left( \beta i \sigma_3 \right)\Bigr)\right).\end{aligned} \hspace{\stretch{1}}(3.62)

Does this really help? I guess not, but it is nice and tidy.

Returning to the specifics of the problem, we note that squaring the Hamiltonian produces the identity matrix

\begin{aligned}(\boldsymbol{\sigma} \cdot \mathbf{n})^2 &= I \mathbf{n}^2 = I.\end{aligned} \hspace{\stretch{1}}(3.63)

This allows us to exponentiate H by inspection utilizing

\begin{aligned}e^{i \mu (\boldsymbol{\sigma} \cdot \mathbf{n}) } = I \cos\mu + i (\boldsymbol{\sigma} \cdot \mathbf{n}) \sin\mu\end{aligned} \hspace{\stretch{1}}(3.64)

Writing \sin\mu = S_\mu, and \cos\mu = C_\mu, we have

\begin{aligned}\boldsymbol{\sigma} \cdot \mathbf{n} &=\begin{bmatrix}C_\alpha & S_\alpha e^{-i\beta} \\ S_\alpha e^{i\beta} & -C_\alpha\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(3.65)

and thus

\begin{aligned}U(\Delta t) = \exp( -i \Delta t H /\hbar )=\begin{bmatrix}C_{\Delta t/\hbar} -i S_{\Delta t/\hbar} C_\alpha & -i S_{\Delta t/\hbar} S_\alpha e^{-i\beta} \\ -i S_{\Delta t/\hbar} S_\alpha e^{i\beta} & C_{\Delta t/\hbar} + i S_{\Delta t/\hbar} C_\alpha\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.66)

Note that as a sanity check we can calculate that U(\Delta t) U(\Delta t)^\dagger = 1 as expected.
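Here is that sanity check done numerically, together with a comparison of 3.66 against a direct matrix exponential (a sketch with \hbar = 1 and sample angles):

import numpy as np
from scipy.linalg import expm

# sigma.n from 3.65 for sample angles, and the closed form U from 3.66.
alpha, beta, dt = 0.8, 1.9, 0.6
sn = np.array([[np.cos(alpha), np.sin(alpha) * np.exp(-1j * beta)],
               [np.sin(alpha) * np.exp(1j * beta), -np.cos(alpha)]])
U = np.cos(dt) * np.eye(2) - 1j * np.sin(dt) * sn

print(np.allclose(U, expm(-1j * dt * sn)))     # closed form matches exp(-i dt H)
print(np.allclose(U @ U.conj().T, np.eye(2)))  # unitary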

Now for \Delta t = t, we have

\begin{aligned}U(t,0) \begin{bmatrix}a \\ b\end{bmatrix}&=\begin{bmatrix}a C_{t/\hbar} -a i S_{t/\hbar} C_\alpha  - b i S_{t/\hbar} S_\alpha e^{-i\beta} \\ -a i S_{t/\hbar} S_\alpha e^{i\beta} + b C_{t/\hbar} + b i S_{t/\hbar} C_\alpha\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.67)

It doesn’t seem terribly illuminating to multiply this all out, but we can factor the results slightly to tidy it up. That gives us

\begin{aligned}U(t,0) \begin{bmatrix}a \\ b\end{bmatrix}&=\cos(t/\hbar)\begin{bmatrix}a \\ b\end{bmatrix}- i \sin(t/\hbar) \cos\alpha\begin{bmatrix}a \\ -b\end{bmatrix}- i\sin(t/\hbar) \sin\alpha\begin{bmatrix}b e^{-i\beta} \\ a e^{i \beta}\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.68)

Problem 7.

Statement.

Consider a system of spin 1/2 particles in a mixed ensemble containing a mixture of 25% of the particles in the state {\lvert {z+} \rangle}, and 75% in the state {\lvert {x-} \rangle}. Find the density matrix and the ensemble averages of the Pauli matrices.

Solution.

We have

\begin{aligned}\rho &= \frac{1}{4} {\lvert {z+} \rangle}{\langle {z+} \rvert}+\frac{3}{4} {\lvert {x-} \rangle}{\langle {x-} \rvert} \\ &=\frac{1}{{4}} \begin{bmatrix}1 \\ 0\end{bmatrix}\begin{bmatrix}1 & 0\end{bmatrix}+\frac{3}{4} \frac{1}{{2}}\begin{bmatrix}1 \\ -1\end{bmatrix}\begin{bmatrix}1 & -1\end{bmatrix} \\ &=\frac{1}{{4}} \left(\frac{1}{{2}}\begin{bmatrix}2 & 0 \\ 0 & 0\end{bmatrix}+\frac{3}{2}\begin{bmatrix}1 & -1 \\ -1 & 1\end{bmatrix}\right) \\ \end{aligned}

Giving us

\begin{aligned}\rho =\frac{1}{{8}}\begin{bmatrix}5 & -3 \\ -3 & 3\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.69)

Note that we can also factor the identity out of this, writing

\begin{aligned}\rho &=\frac{1}{{2}}\begin{bmatrix}5/4 & -3/4 \\ -3/4 & 3/4\end{bmatrix}\\ &=\frac{1}{{2}}\left(I +\begin{bmatrix}1/4 & -3/4 \\ -3/4 & -1/4\end{bmatrix}\right)\end{aligned}

which is just:

\begin{aligned}\rho = \frac{1}{{2}} \left( I + \frac{1}{{4}} \sigma_z -\frac{3}{4} \sigma_x \right)\end{aligned} \hspace{\stretch{1}}(3.70)

Recall that the ensemble average is related to the trace of the product of the density matrix and the operator

\begin{aligned}\text{Tr}( \rho A )&=\sum_\beta {\langle {\beta} \rvert} \rho A {\lvert {\beta} \rangle} \\ &=\sum_{\beta} {\langle {\beta} \rvert} \left( \sum_\alpha w_\alpha {\lvert {\alpha} \rangle}{\langle {\alpha} \rvert} \right) A {\lvert {\beta} \rangle} \\ &=\sum_{\alpha, \beta} w_\alpha \left\langle{{\beta}} \vert {{\alpha}}\right\rangle{\langle {\alpha} \rvert} A {\lvert {\beta} \rangle} \\ &=\sum_{\alpha, \beta} w_\alpha {\langle {\alpha} \rvert} A {\lvert {\beta} \rangle} \left\langle{{\beta}} \vert {{\alpha}}\right\rangle\\ &=\sum_{\alpha} w_\alpha {\langle {\alpha} \rvert} A \left( \sum_\beta {\lvert {\beta} \rangle} {\langle {\beta} \rvert} \right) {\lvert {\alpha} \rangle}\\ &=\sum_\alpha w_\alpha {\langle {\alpha} \rvert} A {\lvert {\alpha} \rangle}\end{aligned}

But this, by definition of the ensemble average, is just

\begin{aligned}\text{Tr}( \rho A )&=\left\langle{{A}}\right\rangle_{\text{av}}.\end{aligned} \hspace{\stretch{1}}(3.71)

We can use this to compute the ensemble averages of the Pauli matrices

\begin{aligned}\left\langle{{\sigma_x}}\right\rangle_{\text{av}} &= \text{Tr} \left(\frac{1}{{8}}\begin{bmatrix}5 & -3 \\ -3 & 3\end{bmatrix}\begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}\right) = -\frac{3}{4} \\ \left\langle{{\sigma_y}}\right\rangle_{\text{av}} &= \text{Tr} \left(\frac{1}{{8}}\begin{bmatrix}5 & -3 \\ -3 & 3\end{bmatrix}\begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix}\right) = 0 \\ \left\langle{{\sigma_z}}\right\rangle_{\text{av}} &= \text{Tr} \left(\frac{1}{{8}}\begin{bmatrix}5 & -3 \\ -3 & 3\end{bmatrix}\begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix}\right) = \frac{1}{4} \\ \end{aligned}

We can also find these without explicit matrix multiplication, from 3.70

\begin{aligned}\left\langle{{\sigma_x}}\right\rangle_{\text{av}} &= \text{Tr} \frac{1}{{2}}\left(\sigma_x + \frac{1}{{4}} \sigma_z \sigma_x -\frac{3}{4} \sigma_x^2\right) = -\frac{3}{4} \\ \left\langle{{\sigma_y}}\right\rangle_{\text{av}} &= \text{Tr} \frac{1}{{2}}\left(\sigma_y + \frac{1}{{4}} \sigma_z \sigma_y -\frac{3}{4} \sigma_x \sigma_y\right) = 0 \\ \left\langle{{\sigma_z}}\right\rangle_{\text{av}} &= \text{Tr} \frac{1}{{2}}\left(\sigma_z + \frac{1}{{4}} \sigma_z^2 -\frac{3}{4} \sigma_x \sigma_z\right) = \frac{1}{{4}}.\end{aligned}

(where to do so we observe that \text{Tr} \sigma_i \sigma_j = 0 for i\ne j and \text{Tr} \sigma_i = 0, and \text{Tr} \sigma_i^2 = 2.)
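The same ensemble averages drop out of a few lines of numpy (a quick check of the three traces):

import numpy as np

# Density matrix 3.69 and the Pauli matrices.
rho = np.array([[5, -3], [-3, 3]]) / 8
sigma = [np.array([[0, 1], [1, 0]]),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]])]

print([np.trace(rho @ s).real for s in sigma])  # [-0.75, 0.0, 0.25]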

We see that the traces of the density operator and Pauli matrix products act very much like dot products, extracting out the ensemble averages, which end up looking like the magnitudes of the projections in each of the directions.

Problem 8.

Statement.

Show that the quantity \boldsymbol{\sigma} \cdot \mathbf{p} V(r) \boldsymbol{\sigma} \cdot \mathbf{p}, when simplified, has a term proportional to \mathbf{L} \cdot \boldsymbol{\sigma}.

Solution.

Consider the operation

\begin{aligned}\boldsymbol{\sigma} \cdot \mathbf{p} V(r) \Psi&=- i \hbar \sigma_k \partial_k (V(r) \Psi) \\ &=- i \hbar \sigma_k (\partial_k V(r)) \Psi + V(r) (\boldsymbol{\sigma} \cdot \mathbf{p} ) \Psi  \\ \end{aligned}

With r = \sqrt{\sum_j x_j^2}, we have

\begin{aligned}\partial_k V(r) = \frac{1}{{2}}\frac{1}{{r}} 2 x_k \frac{\partial {V(r)}}{\partial {r}},\end{aligned}

which gives us the commutator

\begin{aligned}\left[{ \boldsymbol{\sigma} \cdot \mathbf{p}},{V(r)}\right]&=- \frac{i \hbar}{r} \frac{\partial {V(r)}}{\partial {r}} (\boldsymbol{\sigma} \cdot \mathbf{x}) \end{aligned} \hspace{\stretch{1}}(3.72)

Inserting this into the operator in question, we have

\begin{aligned}\boldsymbol{\sigma} \cdot \mathbf{p} V(r) \boldsymbol{\sigma} \cdot \mathbf{p} =- \frac{i \hbar}{r} \frac{\partial {V(r)}}{\partial {r}} (\boldsymbol{\sigma} \cdot \mathbf{x}) (\boldsymbol{\sigma} \cdot \mathbf{p} ) + V(r) (\boldsymbol{\sigma} \cdot \mathbf{p} )^2\end{aligned} \hspace{\stretch{1}}(3.73)

Decomposing (\boldsymbol{\sigma} \cdot \mathbf{x}) (\boldsymbol{\sigma} \cdot \mathbf{p} ) into symmetric and antisymmetric components, we should find our \boldsymbol{\sigma} \cdot \mathbf{L} in the antisymmetric term

\begin{aligned}(\boldsymbol{\sigma} \cdot \mathbf{x}) (\boldsymbol{\sigma} \cdot \mathbf{p} )=\frac{1}{{2}} \left\{{\boldsymbol{\sigma} \cdot \mathbf{x}},{\boldsymbol{\sigma} \cdot \mathbf{p}}\right\}+\frac{1}{{2}} \left[{\boldsymbol{\sigma} \cdot \mathbf{x}},{\boldsymbol{\sigma} \cdot \mathbf{p}}\right]\end{aligned} \hspace{\stretch{1}}(3.74)

where we expect \boldsymbol{\sigma} \cdot \mathbf{L} \propto \left[{\boldsymbol{\sigma} \cdot \mathbf{x}},{\boldsymbol{\sigma} \cdot \mathbf{p}}\right]. Alternatively, in components

\begin{aligned}(\boldsymbol{\sigma} \cdot \mathbf{x}) (\boldsymbol{\sigma} \cdot \mathbf{p} )&=\sigma_k x_k \sigma_j p_j \\ &=x_k p_k I + \sum_{j\ne k} \sigma_k \sigma_j x_k p_j \\ &=x_k p_k I + i \sum_m \epsilon_{kjm} \sigma_m x_k p_j \\ &=I (\mathbf{x} \cdot \mathbf{p}) + i (\boldsymbol{\sigma} \cdot \mathbf{L})\end{aligned}
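The crux of this is the Pauli identity (\boldsymbol{\sigma} \cdot \mathbf{u})(\boldsymbol{\sigma} \cdot \mathbf{v}) = (\mathbf{u} \cdot \mathbf{v}) I + i \boldsymbol{\sigma} \cdot (\mathbf{u} \times \mathbf{v}), which can be spot checked numerically for ordinary commuting vectors (a sketch; the operator version above is the same computation with the x, p ordering preserved):

import numpy as np

# Pauli vector as a (3, 2, 2) array, and two arbitrary real vectors.
sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]])
u = np.array([0.3, -1.2, 0.7])
v = np.array([1.1, 0.4, -0.5])

su = np.einsum('i,ijk->jk', u, sigma)  # sigma.u
sv = np.einsum('i,ijk->jk', v, sigma)  # sigma.v
rhs = np.dot(u, v) * np.eye(2) + 1j * np.einsum('i,ijk->jk', np.cross(u, v), sigma)

print(np.allclose(su @ sv, rhs))  # True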

Problem 9.

Statement.

Solution.

TODO.

References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.
