Peeter Joot's (OLD) Blog.

Math, physics, perl, and programming obscurity.

Posts Tagged ‘ensemble average’

An updated compilation of notes, for ‘PHY452H1S Basic Statistical Mechanics’, Taught by Prof. Arun Paramekanti

Posted by peeterjoot on March 3, 2013

In A compilation of notes, so far, for ‘PHY452H1S Basic Statistical Mechanics’ I posted a link to this compilation of statistical mechanics course notes.

That compilation now includes all of the following too (no further updates will be made to any of these):

February 28, 2013 Rotation of diatomic molecules

February 28, 2013 Helmholtz free energy

February 26, 2013 Statistical and thermodynamic connection

February 24, 2013 Ideal gas

February 16, 2013 One dimensional well problem from Pathria chapter II

February 15, 2013 1D pendulum problem in phase space

February 14, 2013 Continuing review of thermodynamics

February 13, 2013 Lightning review of thermodynamics

February 11, 2013 Cartesian to spherical change of variables in 3d phase space

February 10, 2013 n SHO particle phase space volume

February 10, 2013 Change of variables in 2d phase space

February 10, 2013 Some problems from Kittel chapter 3

February 07, 2013 Midterm review, thermodynamics

February 06, 2013 Limit of unfair coin distribution, the hard way

February 05, 2013 Ideal gas and SHO phase space volume calculations

February 03, 2013 One dimensional random walk

February 02, 2013 1D SHO phase space

February 02, 2013 Application of the central limit theorem to a product of random vars

January 31, 2013 Liouville’s theorem questions on density and current

January 30, 2013 State counting

Posted in Math and Physics Learning. | 1 Comment »

PHY452H1S Basic Statistical Mechanics. Lecture 6: Volumes in phase space. Taught by Prof. Arun Paramekanti

Posted by peeterjoot on January 29, 2013

[Click here for a PDF of this post with nicer formatting]

Disclaimer

Peeter’s lecture notes from class. May not be entirely coherent.

Liouville’s theorem

We’ve looked at the continuity equation of phase space density

\begin{aligned}0 = \frac{\partial {\rho}}{\partial {t}} + \sum_{i_\alpha} \left(\frac{\partial {}}{\partial {p_{i_\alpha}}} \left( \rho \dot{p}_{i_\alpha} \right) + \frac{\partial {\left( \rho \dot{x}_{i_\alpha} \right) }}{\partial {x_{i_\alpha}}}\right)\end{aligned} \hspace{\stretch{1}}(1.2.1)

which with

\begin{aligned}\frac{\partial {\dot{p}_{i_\alpha}}}{\partial {p_{i_\alpha}}} + \frac{\partial {\dot{x}_{i_\alpha}}}{\partial {x_{i_\alpha}}} = 0\end{aligned} \hspace{\stretch{1}}(1.2.2)

led us to Liouville’s theorem

\begin{aligned}\boxed{\frac{d\rho}{dt}(x, p, t) = 0}.\end{aligned} \hspace{\stretch{1}}(1.2.3)

We call a system ergodic when, with time, as t \rightarrow \infty, all available phase space is covered. Not all systems are necessarily ergodic, but the hope is that all sufficiently complicated systems will be so.

We hope that

\begin{aligned}\rho(x, p, t \rightarrow \infty) \implies \frac{\partial {\rho}}{\partial {t}} = 0 \qquad \mbox{in steady state}\end{aligned} \hspace{\stretch{1}}(1.2.4)

In particular for \rho = \text{constant}, we see that our continuity equation 1.2.1 results in 1.2.2.

For example, in an SHO system with a cyclic phase space, as in (Fig 1), we can compute the time average of an observable

Fig 1: Phase space volume trajectory

 

\begin{aligned}\left\langle{{A}}\right\rangle = \frac{1}{{\tau}} \int_0^\tau dt A( x_0(t), p_0(t) ),\end{aligned} \hspace{\stretch{1}}(1.2.5)

or equivalently with an ensemble average, imagining that we are averaging over a number of different systems

\begin{aligned}\left\langle{{A}}\right\rangle = \frac{1}{{\tau}} \int dx dp A( x, p ) \underbrace{\rho(x, p)}_{\text{constant}}\end{aligned} \hspace{\stretch{1}}(1.2.6)

If we say that

\begin{aligned}\rho(x, p) = \text{constant} = \frac{1}{{\Omega}},\end{aligned} \hspace{\stretch{1}}(1.2.7)

so that

\begin{aligned}\left\langle{{A}}\right\rangle = \frac{1}{{\Omega}} \int dx dp A( x, p ) \end{aligned} \hspace{\stretch{1}}(1.2.8)

then what is this constant? We fix it by the normalization constraint

\begin{aligned}\int dx dp \rho(x, p) = 1\end{aligned} \hspace{\stretch{1}}(1.2.9)

So, \Omega is the allowed “volume” of phase space, the number of states that the system can take that is consistent with conservation of energy.

What’s the probability for a given configuration? We’ll have to enumerate all the possible configurations. For a coin toss example, we can also ask how many configurations exist where the sum of the “coin tosses” is fixed.
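The coin-toss counting mentioned above is just a binomial coefficient. As a quick illustration (a Python sketch, not part of the lecture; `coin_states` is a made-up helper name):

```python
from math import comb

# Number of N-toss sequences (microstates) whose number of heads is fixed
# at n: the binomial coefficient C(N, n), a coin-toss analogue of Omega.
def coin_states(N, n):
    return comb(N, n)

# For 4 tosses the counts over n = 0..4 heads are 1, 4, 6, 4, 1,
# summing to 2^4 = 16, the total number of configurations.
counts = [coin_states(4, n) for n in range(5)]
```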

A worked example: Ideal gas calculation of \Omega

  • N gas atoms at phase space points \mathbf{x}_i, \mathbf{p}_i
  • constrained to volume V
  • Energy fixed at E.

\begin{aligned}\Omega(N, V, E) = \int_V d\mathbf{x}_1 d\mathbf{x}_2 \cdots d\mathbf{x}_N \int d\mathbf{p}_1 d\mathbf{p}_2 \cdots d\mathbf{p}_N \delta \left(E - \frac{\mathbf{p}_1^2}{2m}- \frac{\mathbf{p}_2^2}{2m}\cdots- \frac{\mathbf{p}_N^2}{2m}\right)=\underbrace{V^N}_{\text{Real space volume, not N dimensional ``volume''}} \int d\mathbf{p}_1 d\mathbf{p}_2 \cdots d\mathbf{p}_N \delta \left(E - \frac{\mathbf{p}_1^2}{2m}- \frac{\mathbf{p}_2^2}{2m}\cdots- \frac{\mathbf{p}_N^2}{2m}\right)\end{aligned} \hspace{\stretch{1}}(1.10)

With \gamma defined implicitly by

\begin{aligned}\frac{d\gamma}{dE} = \Omega\end{aligned} \hspace{\stretch{1}}(1.3.11)

so that with the Heaviside theta function, as in (Fig 2),

\begin{aligned}\Theta(x) = \left\{\begin{array}{l l}1 & \quad x \ge 0 \\ 0 & \quad x < 0\end{array}\right.\end{aligned} \hspace{\stretch{1}}(1.0.12a)

\begin{aligned}\frac{d\Theta}{dx} = \delta(x),\end{aligned} \hspace{\stretch{1}}(1.0.12b)

Fig 2: Heaviside theta

 

we have

\begin{aligned}\gamma(N, V, E) = V^N \int d\mathbf{p}_1 d\mathbf{p}_2 \cdots d\mathbf{p}_N \Theta \left(E - \sum_i \frac{\mathbf{p}_i^2}{2m}\right)\end{aligned} \hspace{\stretch{1}}(1.0.13)

In three dimensions (p_x, p_y, p_z), the dimension of momentum part of the phase space is 3. In general the dimension of the space is 3N. Here

\begin{aligned}\int d\mathbf{p}_1 d\mathbf{p}_2 \cdots d\mathbf{p}_N \Theta \left(E - \sum_i \frac{\mathbf{p}_i^2}{2m}\right),\end{aligned} \hspace{\stretch{1}}(1.0.14)

is the volume of a “sphere” in 3N dimensions, which we found in the problem set to be

\begin{aligned}V_{m} = \frac{ \pi^{m/2} R^{m} }{ \Gamma\left( m/2 + 1 \right)}.\end{aligned} \hspace{\stretch{1}}(1.0.15a)

\begin{aligned}\Gamma(x) = \int_0^\infty dy e^{-y} y^{x-1}\end{aligned} \hspace{\stretch{1}}(1.0.15b)

\begin{aligned}\Gamma(x + 1) = x \Gamma(x) = x!\end{aligned} \hspace{\stretch{1}}(1.0.15c)
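We can sanity check the m-dimensional ball volume formula against the familiar low-dimensional cases (a Python sketch; `ball_volume` is a hypothetical helper name):

```python
from math import pi, gamma

# Volume of an m-dimensional ball of radius R:
#   V_m = pi^(m/2) R^m / Gamma(m/2 + 1)
def ball_volume(m, R):
    return pi ** (m / 2) * R ** m / gamma(m / 2 + 1)

v2 = ball_volume(2, 1.0)   # area of the unit disk: pi
v3 = ball_volume(3, 1.0)   # volume of the unit sphere: 4 pi / 3
```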

Since we have

\begin{aligned}\mathbf{p}_1^2 + \cdots \mathbf{p}_N^2 \le 2 m E\end{aligned} \hspace{\stretch{1}}(1.0.16)

the radius is

\begin{aligned}\text{radius} = \sqrt{ 2 m E}.\end{aligned} \hspace{\stretch{1}}(1.0.17)

This gives

\begin{aligned}\gamma(N, V, E) = V^N \frac{ \pi^{3 N/2} ( 2 m E)^{3 N/2}}{\Gamma( 3N/2 + 1) }= V^N \frac{2}{3N} \frac{ \pi^{3 N/2} ( 2 m E)^{3 N/2}}{\Gamma( 3N/2 ) },\end{aligned} \hspace{\stretch{1}}(1.0.18)

and

\begin{aligned}\Omega(N, V, E) = V^N \pi^{3 N/2} ( 2 m E)^{3 N/2 - 1} \frac{2 m}{\Gamma( 3N/2 ) }\end{aligned} \hspace{\stretch{1}}(1.0.19)

This result is almost correct; we have to correct it in two ways. We have to fix the counting, since we need to assume that all the particles are indistinguishable.

  • Indistinguishability. We must divide by N!.
  • \Omega is not dimensionless. We need to divide by h^{3N}, where h is Planck’s constant.

In the real world we have to consider this as a quantum mechanical system. Imagine a two dimensional phase space. The allowed points are illustrated in (Fig 3).

Fig 3: Phase space volume adjustment for the uncertainty principle

 

Since \Delta x \Delta p \sim \hbar, to answer the question of how many boxes there are, we calculate the total volume, and then divide by the volume of each box. This sort of handwaving wouldn’t be required if we did a proper quantum mechanical treatment.

The corrected result is

\begin{aligned}\boxed{\Omega_{\mathrm{correct}} = \frac{V^N}{N!} \frac{1}{{h^{3N}}} \frac{( 2 \pi m E)^{3 N/2 }}{E} \frac{1}{\Gamma( 3N/2 ) }}\end{aligned} \hspace{\stretch{1}}(1.0.20)

To come

We’ll look at entropy

\begin{aligned}\underbrace{S}_{\text{Entropy}} = \underbrace{k_{\mathrm{B}}}_{\text{Boltzmann's constant}} \ln \underbrace{\Omega_{\mathrm{correct}}}_{\text{phase space volume (number of configurations)}}\end{aligned} \hspace{\stretch{1}}(1.0.21)
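For large N the phase space volume \Omega_{\mathrm{correct}} overflows any floating point representation, but \ln \Omega (and hence S) is perfectly computable via log-gamma. A Python sketch, with arbitrary placeholder values for N, V, E, m, h and a made-up helper name `log_omega`:

```python
from math import log, lgamma, pi

# ln(Omega_correct) from the boxed result, evaluated with log-gamma
# so that large N does not overflow:
#   ln Omega = N ln V - ln N! - 3N ln h
#              + (3N/2) ln(2 pi m E) - ln E - ln Gamma(3N/2)
def log_omega(N, V, E, m, h):
    return (N * log(V)
            - lgamma(N + 1)           # ln N!  (indistinguishability)
            - 3 * N * log(h)          # phase-space cell volume h^(3N)
            + 1.5 * N * log(2 * pi * m * E)
            - log(E)
            - lgamma(1.5 * N))

# Entropy is S = k_B * log_omega(...); here we just evaluate the log
# for a large particle number (units chosen so V = E = m = h = 1).
s = log_omega(N=10**4, V=1.0, E=1.0, m=1.0, h=1.0)
```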

Posted in Math and Physics Learning. | Leave a Comment »

Notes and problems for Desai Chapter V.

Posted by peeterjoot on November 8, 2010

[Click here for a PDF of this post with nicer formatting]

Motivation.

Chapter V notes for [1].

Notes

Problems

Problem 1.

Statement.

Obtain S_x, S_y, S_z for spin 1 in the representation in which S_z and S^2 are diagonal.

Solution.

For spin 1, we have

\begin{aligned}S^2 = 1 (1+1) \hbar^2 \mathbf{1}\end{aligned} \hspace{\stretch{1}}(3.1)

and are interested in the states {\lvert {1,-1} \rangle}, {\lvert {1, 0} \rangle}, and {\lvert {1,1} \rangle}. If, like angular momentum, we assume that we have for m_s = -1,0,1

\begin{aligned}S_z {\lvert {1,m_s} \rangle} = m_s \hbar {\lvert {1, m_s} \rangle}\end{aligned} \hspace{\stretch{1}}(3.2)

and introduce a column matrix representations for the kets as follows

\begin{aligned}{\lvert {1,1} \rangle} &=\begin{bmatrix}1 \\ 0 \\ 0\end{bmatrix} \\ {\lvert {1,0} \rangle} &=\begin{bmatrix}0 \\ 1 \\ 0\end{bmatrix} \\ {\lvert {1,-1} \rangle} &=\begin{bmatrix}0 \\ 0 \\ 1\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(3.3)

then we have, by inspection

\begin{aligned}S_z &= \hbar\begin{bmatrix}1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -1\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.6)

Note that, like the Pauli matrices, and unlike angular momentum, the spin states {\lvert {-1, m_s} \rangle}, {\lvert {0, m_s} \rangle} have not been considered. Do those have any physical interpretation?

That question aside, we can proceed as in the text, utilizing the ladder operator commutators

\begin{aligned}S_{\pm} &= S_x \pm i S_y,\end{aligned} \hspace{\stretch{1}}(3.7)

to determine the values of S_x and S_y indirectly. We find

\begin{aligned}\left[{S_{+}},{S_{-}}\right] &= 2 \hbar S_z \\ \left[{S_{+}},{S_{z}}\right] &= -\hbar S_{+} \\ \left[{S_{-}},{S_{z}}\right] &= \hbar S_{-}.\end{aligned} \hspace{\stretch{1}}(3.8)

Let

\begin{aligned}S_{+} &=\begin{bmatrix}a & b & c \\ d & e & f \\ g & h & i\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.11)

Demanding equality in \left[{S_{z}},{S_{+}}\right]/\hbar = S_{+}, we find

\begin{aligned}\begin{bmatrix}0 & b & 2 c \\ -d & 0 & f \\ -2g & -h & 0\end{bmatrix}&=\begin{bmatrix}a & b & c \\ d & e & f \\ g & h & i\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(3.12)

so we must have

\begin{aligned}S_{+} &=\begin{bmatrix}0 & b & 0 \\ 0 & 0 & f \\ 0 & 0 & 0\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.13)

Furthermore, from \left[{S_{+}},{S_{-}}\right] = 2 \hbar S_z, we find

\begin{aligned}\begin{bmatrix}{\left\lvert{b}\right\rvert}^2 & 0 & 0 \\ 0 & {\left\lvert{f}\right\rvert}^2 - {\left\lvert{b}\right\rvert}^2 & 0 \\ 0 & 0 & -{\left\lvert{f}\right\rvert}^2\end{bmatrix} &= 2 \hbar^2\begin{bmatrix}1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -1\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.14)

We must have {\left\lvert{b}\right\rvert}^2 = {\left\lvert{f}\right\rvert}^2 = 2 \hbar^2. We could probably pick any
b = \sqrt{2} \hbar e^{i\phi}, and f = \sqrt{2} \hbar e^{i\theta}, but assuming we have no reason for a non-zero phase we try

\begin{aligned}S_{+}&=\sqrt{2} \hbar\begin{bmatrix}0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.15)

Putting all the pieces back together, with S_x = (S_{+} + S_{-})/2, and S_y = (S_{+} - S_{-})/2i, we finally have

\begin{aligned}S_x &=\frac{\hbar}{\sqrt{2}}\begin{bmatrix}0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0\end{bmatrix} \\ S_y &=\frac{\hbar}{\sqrt{2} i}\begin{bmatrix}0 & 1 & 0 \\ -1 & 0 & 1 \\ 0 & -1 & 0\end{bmatrix} \\ S_z &=\hbar\begin{bmatrix}1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -1\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.16)

A quick calculation verifies that we have S_x^2 + S_y^2 + S_z^2 = 2 \hbar^2 \mathbf{1}, as expected.
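That quick calculation is easy to automate. A Python sketch (hand-rolled 3x3 matrix helpers, units with \hbar = 1) verifying S_x^2 + S_y^2 + S_z^2 = s(s+1)\mathbf{1} = 2\mathbf{1}:

```python
from math import sqrt

# Spin-1 matrices from the result above, in units where hbar = 1.
r = 1 / sqrt(2)
Sx = [[0, r, 0], [r, 0, r], [0, r, 0]]
Sy = [[0, -1j * r, 0], [1j * r, 0, -1j * r], [0, 1j * r, 0]]
Sz = [[1, 0, 0], [0, 0, 0], [0, 0, -1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matadd(*Ms):
    return [[sum(M[i][j] for M in Ms) for j in range(3)] for i in range(3)]

# S^2 = Sx^2 + Sy^2 + Sz^2 should equal s(s+1) I = 2 I for s = 1.
S2 = matadd(matmul(Sx, Sx), matmul(Sy, Sy), matmul(Sz, Sz))
```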

Problem 2.

Statement.

Obtain eigensolution for operator A = a \sigma_y + b \sigma_z. Call the eigenstates {\lvert {1} \rangle} and {\lvert {2} \rangle}, and determine the probabilities that they will correspond to \sigma_x = +1.

Solution.

The first part is straightforward, and we have

\begin{aligned}A &= a \begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix} + b \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix} \\ &=\begin{bmatrix}b & -i a \\ ia & -b\end{bmatrix}.\end{aligned}

Taking {\left\lvert{A - \lambda I}\right\rvert} = 0 we get

\begin{aligned}\lambda &= \pm \sqrt{a^2 + b^2},\end{aligned} \hspace{\stretch{1}}(3.19)

with eigenvectors proportional to

\begin{aligned}{\lvert {\pm} \rangle} &=\begin{bmatrix}i a \\ b \mp \sqrt{a^2 + b^2}\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.20)

The normalization constant is 1/\sqrt{2 (a^2 + b^2) \mp 2 b \sqrt{a^2 + b^2}}. Now we can call these {\lvert {1} \rangle} and {\lvert {2} \rangle}, but what does the last part of the question mean? What’s meant by \sigma_x = +1?

Asking the prof about this, he says:

“I think it means that the result of a measurement of the x component of spin is +1. This corresponds to the eigenvalue of \sigma_x being +1. The spin operator S_x has eigenvalue +\hbar/2”.

Aside: Question to consider later. Is it significant that {\langle {1} \rvert} \sigma_x {\lvert {1} \rangle} = {\langle {2} \rvert} \sigma_x {\lvert {2} \rangle} = 0?

So, how do we translate this into a mathematical statement?

First let’s recall a couple of details. Recall that the x spin operator has the matrix representation

\begin{aligned}\sigma_x = \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.21)

This has eigenvalues \pm 1, with eigenstates (1,\pm 1)/\sqrt{2}. At the point when the x component spin is observed to be +1, the state of the system was then

\begin{aligned}{\lvert {x+} \rangle} =\frac{1}{{\sqrt{2}}}\begin{bmatrix}1 \\ 1\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.22)

Let’s look at the ways that this state can be formed as linear combinations of our states {\lvert {1} \rangle}, and {\lvert {2} \rangle}. That is

\begin{aligned}\frac{1}{{\sqrt{2}}}\begin{bmatrix}1 \\ 1\end{bmatrix}&=\alpha {\lvert {1} \rangle}+ \beta {\lvert {2} \rangle},\end{aligned} \hspace{\stretch{1}}(3.23)

or

\begin{aligned}\begin{bmatrix}1 \\ 1\end{bmatrix}&=\frac{\alpha}{\sqrt{(a^2 + b^2) - b \sqrt{a^2 + b^2}}}\begin{bmatrix}i a \\ b - \sqrt{a^2 + b^2}\end{bmatrix}+\frac{\beta}{\sqrt{(a^2 + b^2) + b \sqrt{a^2 + b^2}}}\begin{bmatrix}i a \\ b + \sqrt{a^2 + b^2}\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.24)

Letting c = \sqrt{a^2 + b^2}, this is

\begin{aligned}\begin{bmatrix}1 \\ 1\end{bmatrix}&=\frac{\alpha}{\sqrt{c^2 - b c}}\begin{bmatrix}i a \\ b - c\end{bmatrix}+\frac{\beta}{\sqrt{c^2 + b c}}\begin{bmatrix}i a \\ b + c\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.25)

We can solve the \alpha and \beta with Cramer’s rule, yielding

\begin{aligned}\begin{vmatrix}1 & i a \\ 1 & b - c\end{vmatrix}&=\frac{\beta}{\sqrt{c^2 + b c}}\begin{vmatrix}i a  & i a \\ b + c & b - c\end{vmatrix} \\ \begin{vmatrix}1 & i a \\ 1 & b + c\end{vmatrix}&=\frac{\alpha}{\sqrt{c^2 - b c}}\begin{vmatrix}i a  & i a \\ b - c & b + c\end{vmatrix},\end{aligned}

or

\begin{aligned}\alpha &= \frac{(b + c - ia)\sqrt{c^2 - b c}}{2 i a c} \\  \beta &= \frac{(b - c - ia)\sqrt{c^2 + b c}}{-2 i a c} \end{aligned} \hspace{\stretch{1}}(3.26)

It is {\left\lvert{\alpha}\right\rvert}^2 and {\left\lvert{\beta}\right\rvert}^2 that are probabilities, and after a bit of algebra we find that those are

\begin{aligned}{\left\lvert{\alpha}\right\rvert}^2 = {\left\lvert{\beta}\right\rvert}^2 = \frac{1}{{2}},\end{aligned} \hspace{\stretch{1}}(3.28)

so if the x spin of the system is measured as +1, we have a 50% probability of finding the system in either of the states {\lvert {1} \rangle} or {\lvert {2} \rangle}.

Is that what the question was asking? I think that I’ve actually got it backwards. I think that the question was asking for the probability of finding state {\lvert {x+} \rangle} (measuring a spin 1 value for \sigma_x) given the state {\lvert {1} \rangle} or {\lvert {2} \rangle}.

So, suppose that we have

\begin{aligned}\mu_{+} {\lvert {x+} \rangle} + \nu_{+} {\lvert {x-} \rangle} &= {\lvert {1} \rangle} \\ \mu_{-} {\lvert {x+} \rangle} + \nu_{-} {\lvert {x-} \rangle} &= {\lvert {2} \rangle},\end{aligned} \hspace{\stretch{1}}(3.29)

or (considering both cases simultaneously),

\begin{aligned}\mu_{\pm}\begin{bmatrix}1 \\ 1\end{bmatrix}+ \nu_{\pm}\begin{bmatrix}1 \\ -1\end{bmatrix}&= \frac{1}{{\sqrt{ c^2 \mp b c }}} \begin{bmatrix}i a \\ b \mp c\end{bmatrix} \\ \implies \\ \mu_{\pm}\begin{vmatrix}1 & 1 \\ 1 & -1\end{vmatrix}&= \frac{1}{{\sqrt{ c^2 \mp b c }}} \begin{vmatrix}i a & 1 \\ b \mp c & -1\end{vmatrix},\end{aligned}

or

\begin{aligned}\mu_{\pm} &= \frac{ia + b \mp c}{2 \sqrt{c^2 \mp bc}} .\end{aligned} \hspace{\stretch{1}}(3.31)

Unsurprisingly, this mirrors the previous scenario and we find that we have a probability {\left\lvert{\mu}\right\rvert}^2 = 1/2 of measuring a spin 1 value for \sigma_x when the state of the operator A has been measured as \pm \sqrt{a^2 + b^2} (ie: in the states {\lvert {1} \rangle}, or {\lvert {2} \rangle} respectively).

No measurement of the operator A = a \sigma_y + b\sigma_z gives a biased prediction of the state of \sigma_x. Loosely, this seems to justify calling these operators orthogonal. This is consistent with the geometrical antisymmetric nature of the spin components where we have \sigma_y \sigma_x = -\sigma_x \sigma_y, just like two orthogonal vectors under the Clifford product.
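The 50/50 result above is easy to spot-check numerically. A Python sketch, with arbitrary sample values for a and b and a made-up helper `eigvec` building the (3.20) eigenvectors:

```python
from math import sqrt

# Arbitrary sample coefficients for A = a sigma_y + b sigma_z.
a, b = 1.3, 0.7
c = sqrt(a * a + b * b)

# Normalized eigenvectors (i a, b -/+ c) of A, as in eq. (3.20).
def eigvec(sign):                      # sign = +1 or -1 eigenvalue branch
    v = (1j * a, b - sign * c)
    n = sqrt(abs(v[0]) ** 2 + abs(v[1]) ** 2)
    return (v[0] / n, v[1] / n)

xplus = (1 / sqrt(2), 1 / sqrt(2))     # sigma_x = +1 eigenstate

def prob(u, v):                        # |<u|v>|^2
    return abs(u[0].conjugate() * v[0] + u[1].conjugate() * v[1]) ** 2

p1 = prob(xplus, eigvec(+1))           # both should come out to 1/2
p2 = prob(xplus, eigvec(-1))
```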

Problem 3.

Statement.

Obtain the expectation values of S_x, S_y, S_z for the case of a spin 1/2 particle with the spin pointed in the direction of a vector with azimuthal angle \beta and polar angle \alpha.

Solution.

Let’s work with \sigma_k instead of S_k to eliminate the \hbar/2 factors. Before considering the expectation values in the arbitrary spin orientation, let’s consider just the expectation values for \sigma_k. Introducing a matrix representation (assumed normalized) for a reference state

\begin{aligned}{\lvert {\psi} \rangle} &= \begin{bmatrix}a \\ b\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(3.32)

we find

\begin{aligned}{\langle {\psi} \rvert} \sigma_x {\lvert {\psi} \rangle}&=\begin{bmatrix}a^{*} & b^{*}\end{bmatrix}\begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}\begin{bmatrix}a \\ b\end{bmatrix}= a^{*} b + b^{*} a\\ {\langle {\psi} \rvert} \sigma_y {\lvert {\psi} \rangle}&=\begin{bmatrix}a^{*} & b^{*}\end{bmatrix}\begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix}\begin{bmatrix}a \\ b\end{bmatrix}= - i a^{*} b + i b^{*} a \\ {\langle {\psi} \rvert} \sigma_z {\lvert {\psi} \rangle}&=\begin{bmatrix}a^{*} & b^{*}\end{bmatrix}\begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix}\begin{bmatrix}a \\ b\end{bmatrix}= a^{*} a - b^{*} b \end{aligned} \hspace{\stretch{1}}(3.33)

Each of these expectation values are real as expected due to the Hermitian nature of \sigma_k. We also find that

\begin{aligned}\sum_{k=1}^3 {{\langle {\psi} \rvert} \sigma_k {\lvert {\psi} \rangle}}^2 &= ({\left\lvert{a}\right\rvert}^2 + {\left\lvert{b}\right\rvert}^2)^2 = 1\end{aligned} \hspace{\stretch{1}}(3.36)

So a vector formed with the expectation values as components is a unit vector. This doesn’t seem too unexpected from the section on the projection operators in the text where it was stated that {\langle {\chi} \rvert} \boldsymbol{\sigma} {\lvert {\chi} \rangle} = \mathbf{p}, where \mathbf{p} was a unit vector, and this seems similar. Let’s now consider the arbitrarily oriented spin vector \boldsymbol{\sigma} \cdot \mathbf{n}, and look at its expectation value.

With \mathbf{n} as the the rotated image of \hat{\mathbf{z}} by an azimuthal angle \beta, and polar angle \alpha, we have

\begin{aligned}\mathbf{n} = (\sin\alpha \cos\beta,\sin\alpha \sin\beta,\cos\alpha)\end{aligned} \hspace{\stretch{1}}(3.37)

that is

\begin{aligned}\boldsymbol{\sigma} \cdot \mathbf{n} &= \sin\alpha \cos\beta \sigma_x + \sin\alpha \sin\beta \sigma_y + \cos\alpha \sigma_z \end{aligned} \hspace{\stretch{1}}(3.38)

The k = x,y,z projections of this operator

\begin{aligned}\frac{1}{{2}} \text{Tr} \left\{ \sigma_k (\boldsymbol{\sigma} \cdot \mathbf{n}) \right\} \sigma_k\end{aligned} \hspace{\stretch{1}}(3.39)

are just the Pauli matrices scaled by the components of \mathbf{n}

\begin{aligned}\frac{1}{{2}} \text{Tr} \left\{ \sigma_x (\boldsymbol{\sigma} \cdot \mathbf{n}) \right\} \sigma_x &= \sin\alpha \cos\beta \sigma_x  \\ \frac{1}{{2}} \text{Tr} \left\{ \sigma_y (\boldsymbol{\sigma} \cdot \mathbf{n}) \right\} \sigma_y &= \sin\alpha \sin\beta \sigma_y  \\ \frac{1}{{2}} \text{Tr} \left\{ \sigma_z (\boldsymbol{\sigma} \cdot \mathbf{n}) \right\} \sigma_z &= \cos\alpha \sigma_z,\end{aligned} \hspace{\stretch{1}}(3.40)

so our S_k expectation values are by inspection

\begin{aligned}{\langle {\psi} \rvert} S_x {\lvert {\psi} \rangle} &= \frac{\hbar}{2} \sin\alpha \cos\beta ( a^{*} b + b^{*} a ) \\ {\langle {\psi} \rvert} S_y {\lvert {\psi} \rangle} &= \frac{\hbar}{2} \sin\alpha \sin\beta ( - i a^{*} b + i b^{*} a ) \\ {\langle {\psi} \rvert} S_z {\lvert {\psi} \rangle} &= \frac{\hbar}{2} \cos\alpha ( a^{*} a - b^{*} b )\end{aligned} \hspace{\stretch{1}}(3.43)

Is this correct? While (\boldsymbol{\sigma} \cdot \mathbf{n})^2 = \mathbf{n}^2 = I is a unit norm operator, we find that the expectation values of the coordinates of \boldsymbol{\sigma} \cdot \mathbf{n} cannot be viewed as the coordinates of a unit vector. Let’s consider a specific case, with \mathbf{n} = (0,0,1), where the spin is oriented along the z-axis. That gives us

\begin{aligned}\boldsymbol{\sigma} \cdot \mathbf{n} = \sigma_z\end{aligned} \hspace{\stretch{1}}(3.46)

so the expectation values of S_k are

\begin{aligned}\left\langle{{S_x}}\right\rangle &= 0 \\ \left\langle{{S_y}}\right\rangle &= 0 \\ \left\langle{{S_z}}\right\rangle &= \frac{\hbar}{2} ( a^{*} a - b^{*} b )\end{aligned} \hspace{\stretch{1}}(3.47)

Given this is seems reasonable that from 3.43 we find

\begin{aligned}\sum_k {{\langle {\psi} \rvert} S_k {\lvert {\psi} \rangle}}^2 \ne \hbar^2/4,\end{aligned} \hspace{\stretch{1}}(3.50)

(since we don’t have any reason to believe that in general ( a^{*} a - b^{*} b )^2 = 1 is true).

The most general statement we can make about these expectation values (an average observed value for the measurement of the operator) is that

\begin{aligned}{\left\lvert{\left\langle{{S_k}}\right\rangle}\right\rvert} \le \frac{\hbar}{2} \end{aligned} \hspace{\stretch{1}}(3.51)

with equality for specific states and orientations only.

Problem 4.

Statement.

Take the azimuthal angle, \beta = 0, so that the spin is in the
x-z plane at an angle \alpha with respect to the z-axis, and the unit vector is \mathbf{n} = (\sin\alpha, 0, \cos\alpha). Write

\begin{aligned}{\lvert {\chi_{n+}} \rangle} = {\lvert {+\alpha} \rangle}\end{aligned} \hspace{\stretch{1}}(3.52)

for this case. Show that the probability that it is in the spin-up state in the direction \theta with respect to the z-axis is

\begin{aligned}{\left\lvert{ \left\langle{{+\theta}} \vert {{+\alpha}}\right\rangle }\right\rvert}^2 = \cos^2 \frac{\alpha - \theta}{2}\end{aligned} \hspace{\stretch{1}}(3.53)

Also obtain the expectation value of \boldsymbol{\sigma} \cdot \mathbf{n} with respect to the state {\lvert {+\theta} \rangle}.

Solution.

For this orientation we have

\begin{aligned}\boldsymbol{\sigma} \cdot \mathbf{n}&=\sin\alpha \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix} + \cos\alpha \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix}=\begin{bmatrix}\cos\alpha & \sin\alpha \\ \sin\alpha & -\cos\alpha\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.54)

Confirmation that our eigenvalues are \pm 1 is simple, and our eigenstate for the +1 eigenvalue is found to be

\begin{aligned}{\lvert {+\alpha} \rangle} \propto \begin{bmatrix}\sin\alpha \\ 1 - \cos\alpha\end{bmatrix}= \begin{bmatrix}2 \sin(\alpha/2) \cos(\alpha/2) \\ 2 \sin^2 (\alpha/2)\end{bmatrix}\propto\begin{bmatrix}\cos \alpha/2 \\ \sin\alpha/2 \end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.55)

This last has unit norm, so we can write

\begin{aligned}{\lvert {+\alpha} \rangle} =\begin{bmatrix}\cos \alpha/2 \\ \sin\alpha/2 \end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.56)

If the state has been measured to be

\begin{aligned}{\lvert {\phi} \rangle} = 1 {\lvert {+\alpha} \rangle} + 0 {\lvert {-\alpha} \rangle},\end{aligned} \hspace{\stretch{1}}(3.57)

then the probability of a second measurement obtaining {\lvert {+\theta} \rangle} is

\begin{aligned}{\left\lvert{ \left\langle{{+\theta}} \vert {{\phi}}\right\rangle }\right\rvert}^2&={\left\lvert{ \left\langle{{+\theta}} \vert {{+\alpha}}\right\rangle }\right\rvert}^2 .\end{aligned} \hspace{\stretch{1}}(3.58)

Expanding just the inner product first we have

\begin{aligned}\left\langle{{+\theta}} \vert {{+\alpha}}\right\rangle &=\begin{bmatrix}C_{\theta/2} & S_{\theta/2} \end{bmatrix}\begin{bmatrix}C_{\alpha/2} \\  S_{\alpha/2} \end{bmatrix} \\ &=S_{\theta/2} S_{\alpha/2} + C_{\theta/2} C_{\alpha/2}  \\ &= \cos\left( \frac{\theta - \alpha}{2} \right)\end{aligned}

So our probability of measuring spin up state {\lvert {+\theta} \rangle} given the state was known to have been in spin up state {\lvert {+\alpha} \rangle} is

\begin{aligned}{\left\lvert{ \left\langle{{+\theta}} \vert {{+\alpha}}\right\rangle }\right\rvert}^2 = \cos^2\left( \frac{\theta - \alpha}{2} \right)\end{aligned} \hspace{\stretch{1}}(3.59)

Finally, the expectation value for \boldsymbol{\sigma} \cdot \mathbf{n} with respect to {\lvert {+\theta} \rangle} is

\begin{aligned}\begin{bmatrix}C_{\theta/2} & S_{\theta/2} \end{bmatrix}\begin{bmatrix}C_\alpha & S_\alpha \\ S_\alpha & -C_\alpha\end{bmatrix}\begin{bmatrix}C_{\theta/2} \\ S_{\theta/2} \end{bmatrix} &=\begin{bmatrix}C_{\theta/2} & S_{\theta/2} \end{bmatrix}\begin{bmatrix}C_\alpha C_{\theta/2} + S_\alpha S_{\theta/2} \\ S_\alpha C_{\theta/2} - C_\alpha S_{\theta/2} \end{bmatrix} \\ &=C_{\theta/2} C_\alpha C_{\theta/2} + C_{\theta/2} S_\alpha S_{\theta/2} + S_{\theta/2} S_\alpha C_{\theta/2} - S_{\theta/2} C_\alpha S_{\theta/2} \\ &=C_\alpha ( C_{\theta/2}^2 -S_{\theta/2}^2 )+ 2 S_\alpha S_{\theta/2} C_{\theta/2} \\ &= C_\alpha C_\theta+ S_\alpha S_\theta \\ &= \cos( \alpha - \theta )\end{aligned}

Sanity checking this we observe that we have +1 as desired for the \alpha = \theta case.
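Both results of this problem can be spot-checked numerically for arbitrary angles. A Python sketch (the angle values are arbitrary picks):

```python
from math import cos, sin, isclose

# Arbitrary sample angles for the two spin directions.
alpha, theta = 0.8, 2.1

ka = (cos(alpha / 2), sin(alpha / 2))   # |+alpha>
kt = (cos(theta / 2), sin(theta / 2))   # |+theta>

# |<+theta|+alpha>|^2, which should equal cos^2((theta - alpha)/2).
overlap = kt[0] * ka[0] + kt[1] * ka[1]
prob = overlap ** 2

# <+theta| sigma.n |+theta> for n = (sin(alpha), 0, cos(alpha)),
# which should equal cos(alpha - theta).
sn = [[cos(alpha), sin(alpha)], [sin(alpha), -cos(alpha)]]
expect = sum(kt[i] * sn[i][j] * kt[j] for i in range(2) for j in range(2))
```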

Problem 5.

Statement.

Consider an arbitrary density matrix, \rho, for a spin 1/2 system. Express each matrix element in terms of the ensemble averages [S_i] where i = x,y,z.

Solution.

Let’s omit the spin direction temporarily and write for the density matrix

\begin{aligned}\rho &= w_{+} {\lvert {+} \rangle}{\langle {+} \rvert}+w_{-} {\lvert {-} \rangle}{\langle {-} \rvert} \\ &=w_{+} {\lvert {+} \rangle}{\langle {+} \rvert}+(1 - w_{+}){\lvert {-} \rangle}{\langle {-} \rvert} \\ &={\lvert {-} \rangle}{\langle {-} \rvert} +w_{+} ({\lvert {+} \rangle}{\langle {+} \rvert} -{\lvert {-} \rangle}{\langle {-} \rvert})\end{aligned}

For the ensemble average (no sum over repeated indexes) we have

\begin{aligned}[S] = \left\langle{{S}}\right\rangle_{av} &= w_{+} {\langle {+} \rvert} S {\lvert {+} \rangle} +w_{-} {\langle {-} \rvert} S {\lvert {-} \rangle} \\ &= \frac{\hbar}{2}( w_{+} -w_{-} ) \\ &= \frac{\hbar}{2}( w_{+} -(1 - w_{+}) ) \\ &= \hbar \left( w_{+} - \frac{1}{{2}} \right)\end{aligned}

This gives us

\begin{aligned}w_{+} = \frac{1}{{\hbar}} [S] + \frac{1}{{2}}\end{aligned}

and our density matrix becomes

\begin{aligned}\rho &=\frac{1}{{2}} ( {\lvert {+} \rangle}{\langle {+} \rvert} +{\lvert {-} \rangle}{\langle {-} \rvert} )+\frac{1}{{\hbar}} [S] ({\lvert {+} \rangle}{\langle {+} \rvert} -{\lvert {-} \rangle}{\langle {-} \rvert}) \\ &=\frac{1}{{2}} I+\frac{1}{{\hbar}} [S] ({\lvert {+} \rangle}{\langle {+} \rvert} -{\lvert {-} \rangle}{\langle {-} \rvert})\end{aligned}

Utilizing

\begin{aligned}{\lvert {x+} \rangle} &= \frac{1}{{\sqrt{2}}}\begin{bmatrix}1 \\ 1\end{bmatrix} \\ {\lvert {x-} \rangle} &= \frac{1}{{\sqrt{2}}}\begin{bmatrix}1 \\ -1\end{bmatrix} \\ {\lvert {y+} \rangle} &= \frac{1}{{\sqrt{2}}}\begin{bmatrix}1 \\ i\end{bmatrix} \\ {\lvert {y-} \rangle} &= \frac{1}{{\sqrt{2}}}\begin{bmatrix}1 \\ -i\end{bmatrix} \\ {\lvert {z+} \rangle} &= \begin{bmatrix}1 \\ 0\end{bmatrix} \\ {\lvert {z-} \rangle} &= \begin{bmatrix}0 \\ 1\end{bmatrix}\end{aligned}

We can easily find

\begin{aligned}{\lvert {x+} \rangle}{\langle {x+} \rvert} -{\lvert {x-} \rangle}{\langle {x-} \rvert} &= \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix} = \sigma_x \\ {\lvert {y+} \rangle}{\langle {y+} \rvert} -{\lvert {y-} \rangle}{\langle {y-} \rvert} &= \begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix} = \sigma_y \\ {\lvert {z+} \rangle}{\langle {z+} \rvert} -{\lvert {z-} \rangle}{\langle {z-} \rvert} &= \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix} = \sigma_z\end{aligned}

So we can write the density matrix in terms of any of the ensemble averages as

\begin{aligned}\rho =\frac{1}{{2}} I+\frac{1}{{\hbar}} [S_i] \sigma_i=\frac{1}{{2}} (I + [\sigma_i] \sigma_i )\end{aligned}

Alternatively, defining \mathbf{P}_i = [\sigma_i] \mathbf{e}_i, for any of the directions i = 1,2,3 we can write

\begin{aligned}\rho = \frac{1}{{2}} (I + \boldsymbol{\sigma} \cdot \mathbf{P}_i )\end{aligned} \hspace{\stretch{1}}(3.60)

In equation (5.109) we had a similar result in terms of the polarization vector \mathbf{P} = {\langle {\alpha} \rvert} \boldsymbol{\sigma} {\lvert {\alpha} \rangle}, and the individual weights w_\alpha, and w_\beta, but we see here that this (w_\alpha - w_\beta)\mathbf{P} factor can be written exclusively in terms of the ensemble average. Actually, this is also a result in the text, down in (5.113), but we see it here in a more concrete form having picked specific spin directions.

Problem 6.

Statement.

If a Hamiltonian is given by \boldsymbol{\sigma} \cdot \mathbf{n} where \mathbf{n} = (\sin\alpha\cos\beta, \sin\alpha\sin\beta, \cos\alpha), determine the time evolution operator as a 2 x 2 matrix. If a state at t = 0 is given by

\begin{aligned}{\lvert {\phi(0)} \rangle} = \begin{bmatrix}a \\ b\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(3.61)

then obtain {\lvert {\phi(t)} \rangle}.

Solution.

Before diving into the meat of the problem, observe that a tidy factorization of the Hamiltonian is possible as a composition of rotations. That is

\begin{aligned}H &= \boldsymbol{\sigma} \cdot \mathbf{n} \\ &= \sin\alpha \sigma_1 ( \cos\beta + \sigma_1 \sigma_2 \sin\beta ) + \cos\alpha \sigma_3 \\ &= \sigma_3 \left(\cos\alpha + \sin\alpha \sigma_3 \sigma_1 e^{ i \sigma_3 \beta }\right) \\ &= \sigma_3 \exp\left( \alpha i \sigma_2 \exp\left( \beta i \sigma_3 \right)\right)\end{aligned}

So we have for the time evolution operator

\begin{aligned}U(\Delta t) &=\exp( -i \Delta t H /\hbar )= \exp \left(- \frac{\Delta t}{\hbar} i \sigma_3 \exp\Bigl( \alpha i \sigma_2 \exp\left( \beta i \sigma_3 \right)\Bigr)\right).\end{aligned} \hspace{\stretch{1}}(3.62)

Does this really help? I guess not, but it is nice and tidy.

Returning to the specifics of the problem, we note that squaring the Hamiltonian produces the identity matrix

\begin{aligned}(\boldsymbol{\sigma} \cdot \mathbf{n})^2 &= I \mathbf{n}^2 = I.\end{aligned} \hspace{\stretch{1}}(3.63)

This allows us to exponentiate H by inspection utilizing

\begin{aligned}e^{i \mu (\boldsymbol{\sigma} \cdot \mathbf{n}) } = I \cos\mu + i (\boldsymbol{\sigma} \cdot \mathbf{n}) \sin\mu\end{aligned} \hspace{\stretch{1}}(3.64)

Writing \sin\mu = S_\mu, and \cos\mu = C_\mu, we have

\begin{aligned}\boldsymbol{\sigma} \cdot \mathbf{n} &=\begin{bmatrix}C_\alpha & S_\alpha e^{-i\beta} \\ S_\alpha e^{i\beta} & -C_\alpha\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(3.65)

and thus

\begin{aligned}U(\Delta t) = \exp( -i \Delta t H /\hbar )=\begin{bmatrix}C_{\Delta t/\hbar} -i S_{\Delta t/\hbar} C_\alpha & -i S_{\Delta t/\hbar} S_\alpha e^{-i\beta} \\ -i S_{\Delta t/\hbar} S_\alpha e^{i\beta} & C_{\Delta t/\hbar} + i S_{\Delta t/\hbar} C_\alpha\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.66)

Note that as a sanity check we can calculate that U(\Delta t) U(\Delta t)^\dagger = 1 as expected.

Now for \Delta t = t, we have

\begin{aligned}U(t,0) \begin{bmatrix}a \\ b\end{bmatrix}&=\begin{bmatrix}a C_{t/\hbar} -a i S_{t/\hbar} C_\alpha  - b i S_{t/\hbar} S_\alpha e^{-i\beta} \\ -a i S_{t/\hbar} S_\alpha e^{i\beta} + b C_{t/\hbar} + b i S_{t/\hbar} C_\alpha\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.67)

It doesn’t seem terribly illuminating to multiply this all out, but we can factor the results slightly to tidy it up. That gives us

\begin{aligned}U(t,0) \begin{bmatrix}a \\ b\end{bmatrix}&=\cos(t/\hbar)\begin{bmatrix}a \\ b\end{bmatrix}+ i \sin(t/\hbar) \cos\alpha\begin{bmatrix}-a \\ b\end{bmatrix}+ i\sin(t/\hbar) \sin\alpha\begin{bmatrix}b e^{-i\beta} \\ -a e^{i \beta}\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.68)
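Both the unitarity sanity check and the evolved state can be verified numerically. This sketch (not part of the original notes; assuming numpy, \hbar = 1, and arbitrary test values) compares the matrix of 3.66 against a direct series exponential of -i t H:

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def expm_series(M, terms=60):
    """Matrix exponential via truncated power series."""
    out = np.eye(2, dtype=complex)
    term = np.eye(2, dtype=complex)
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

alpha, beta, t = 0.7, 1.3, 0.9  # arbitrary test values, hbar = 1
n = np.array([np.sin(alpha) * np.cos(beta),
              np.sin(alpha) * np.sin(beta),
              np.cos(alpha)])
H = n[0] * s1 + n[1] * s2 + n[2] * s3

# U from eq. (3.66), with C = cos(t/hbar), S = sin(t/hbar)
C, S = np.cos(t), np.sin(t)
U = np.array([[C - 1j * S * np.cos(alpha), -1j * S * np.sin(alpha) * np.exp(-1j * beta)],
              [-1j * S * np.sin(alpha) * np.exp(1j * beta), C + 1j * S * np.cos(alpha)]])

assert np.allclose(U, expm_series(-1j * t * H))  # matches exp(-i t H / hbar)
assert np.allclose(U @ U.conj().T, np.eye(2))    # unitarity sanity check

a, b = 0.6, 0.8j  # an arbitrary normalized initial state |phi(0)>
phi_t = U @ np.array([a, b])
```

Since U is unitary, the evolved state phi_t keeps unit norm.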

Problem 7.

Statement.

Consider a system of spin 1/2 particles in a mixed ensemble containing a mixture of 25\% of particles in the state {\lvert {z+} \rangle} and 75\% in the state {\lvert {x-} \rangle}. Find the density matrix, and use it to compute the ensemble averages of the Pauli matrices.

Solution.

We have

\begin{aligned}\rho &= \frac{1}{4} {\lvert {z+} \rangle}{\langle {z+} \rvert}+\frac{3}{4} {\lvert {x-} \rangle}{\langle {x-} \rvert} \\ &=\frac{1}{{4}} \begin{bmatrix}1 \\ 0\end{bmatrix}\begin{bmatrix}1 & 0\end{bmatrix}+\frac{3}{4} \frac{1}{{2}}\begin{bmatrix}1 \\ -1\end{bmatrix}\begin{bmatrix}1 & -1\end{bmatrix} \\ &=\frac{1}{{4}} \left(\frac{1}{{2}}\begin{bmatrix}2 & 0 \\ 0 & 0\end{bmatrix}+\frac{3}{2}\begin{bmatrix}1 & -1 \\ -1 & 1\end{bmatrix}\right) \\ \end{aligned}

Giving us

\begin{aligned}\rho =\frac{1}{{8}}\begin{bmatrix}5 & -3 \\ -3 & 3\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.69)

Note that we can also factor the identity out of this for

\begin{aligned}\rho &=\frac{1}{{2}}\begin{bmatrix}5/4 & -3/4 \\ -3/4 & 3/4\end{bmatrix}\\ &=\frac{1}{{2}}\left(I +\begin{bmatrix}1/4 & -3/4 \\ -3/4 & -1/4\end{bmatrix}\right)\end{aligned}

which is just:

\begin{aligned}\rho = \frac{1}{{2}} \left( I + \frac{1}{{4}} \sigma_z -\frac{3}{4} \sigma_x \right)\end{aligned} \hspace{\stretch{1}}(3.70)
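A small numerical sketch (not part of the original notes; assuming numpy) confirms both 3.69 and the Pauli decomposition 3.70:

```python
import numpy as np

zp = np.array([1, 0], dtype=complex)                # |z+>
xm = np.array([1, -1], dtype=complex) / np.sqrt(2)  # |x->

# rho = (1/4) |z+><z+| + (3/4) |x-><x-|
rho = 0.25 * np.outer(zp, zp.conj()) + 0.75 * np.outer(xm, xm.conj())
assert np.allclose(rho, np.array([[5, -3], [-3, 3]]) / 8)  # eq. (3.69)

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
# eq. (3.70): rho = (1/2)( I + sigma_z / 4 - 3 sigma_x / 4 )
assert np.allclose(rho, 0.5 * (np.eye(2) + 0.25 * sz - 0.75 * sx))
assert np.isclose(np.trace(rho).real, 1.0)  # unit trace, as for any density matrix
```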

Recall that the ensemble average is related to the trace of the density and operator product

\begin{aligned}\text{Tr}( \rho A )&=\sum_\beta {\langle {\beta} \rvert} \rho A {\lvert {\beta} \rangle} \\ &=\sum_{\beta} {\langle {\beta} \rvert} \left( \sum_\alpha w_\alpha {\lvert {\alpha} \rangle}{\langle {\alpha} \rvert} \right) A {\lvert {\beta} \rangle} \\ &=\sum_{\alpha, \beta} w_\alpha \left\langle{{\beta}} \vert {{\alpha}}\right\rangle{\langle {\alpha} \rvert} A {\lvert {\beta} \rangle} \\ &=\sum_{\alpha, \beta} w_\alpha {\langle {\alpha} \rvert} A {\lvert {\beta} \rangle} \left\langle{{\beta}} \vert {{\alpha}}\right\rangle\\ &=\sum_{\alpha} w_\alpha {\langle {\alpha} \rvert} A \left( \sum_\beta {\lvert {\beta} \rangle} {\langle {\beta} \rvert} \right) {\lvert {\alpha} \rangle}\\ &=\sum_\alpha w_\alpha {\langle {\alpha} \rvert} A {\lvert {\alpha} \rangle}\end{aligned}

But this, by definition of the ensemble average, is just

\begin{aligned}\text{Tr}( \rho A )&=\left\langle{{A}}\right\rangle_{\text{av}}.\end{aligned} \hspace{\stretch{1}}(3.71)

We can use this to compute the ensemble averages of the Pauli matrices

\begin{aligned}\left\langle{{\sigma_x}}\right\rangle_{\text{av}} &= \text{Tr} \left(\frac{1}{{8}}\begin{bmatrix}5 & -3 \\ -3 & 3\end{bmatrix}\begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}\right) = -\frac{3}{4} \\ \left\langle{{\sigma_y}}\right\rangle_{\text{av}} &= \text{Tr} \left(\frac{1}{{8}}\begin{bmatrix}5 & -3 \\ -3 & 3\end{bmatrix}\begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix}\right) = 0 \\ \left\langle{{\sigma_z}}\right\rangle_{\text{av}} &= \text{Tr} \left(\frac{1}{{8}}\begin{bmatrix}5 & -3 \\ -3 & 3\end{bmatrix}\begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix}\right) = \frac{1}{4} \\ \end{aligned}

We can also find these without explicit matrix multiplication, using 3.70

\begin{aligned}\left\langle{{\sigma_x}}\right\rangle_{\text{av}} &= \text{Tr} \frac{1}{{2}}\left(\sigma_x + \frac{1}{{4}} \sigma_z \sigma_x -\frac{3}{4} \sigma_x^2\right) = -\frac{3}{4} \\ \left\langle{{\sigma_y}}\right\rangle_{\text{av}} &= \text{Tr} \frac{1}{{2}}\left(\sigma_y + \frac{1}{{4}} \sigma_z \sigma_y -\frac{3}{4} \sigma_x \sigma_y\right) = 0 \\ \left\langle{{\sigma_z}}\right\rangle_{\text{av}} &= \text{Tr} \frac{1}{{2}}\left(\sigma_z + \frac{1}{{4}} \sigma_z^2 -\frac{3}{4} \sigma_x \sigma_z\right) = \frac{1}{{4}}.\end{aligned}

(To do so we observe that \text{Tr} (\sigma_i \sigma_j) = 0 for i\ne j, \text{Tr} \sigma_i = 0, and \text{Tr} \sigma_i^2 = 2.)

We see that the traces of the density operator and Pauli matrix products act very much like dot products extracting out the ensemble averages, which end up very much like the magnitudes of the projections in each of the directions.
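This dot-product behaviour is exactly the Bloch-vector decomposition \rho = (I + \mathbf{r} \cdot \boldsymbol{\sigma})/2, with r_i = \left\langle \sigma_i \right\rangle_{\text{av}}. A quick numerical sketch (not part of the original notes; assuming numpy) confirms the three averages and the reconstruction:

```python
import numpy as np

rho = np.array([[5, -3], [-3, 3]], dtype=complex) / 8
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def avg(A):
    """Ensemble average <A> = Tr(rho A)."""
    return np.trace(rho @ A).real

assert np.isclose(avg(sx), -0.75)
assert np.isclose(avg(sy), 0.0)
assert np.isclose(avg(sz), 0.25)

# The averages are the Bloch-vector components: rho = (I + r . sigma)/2
r = np.array([avg(sx), avg(sy), avg(sz)])
assert np.allclose(rho, 0.5 * (np.eye(2) + r[0] * sx + r[1] * sy + r[2] * sz))
```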

Problem 8.

Statement.

Show that the quantity \boldsymbol{\sigma} \cdot \mathbf{p} V(r) \boldsymbol{\sigma} \cdot \mathbf{p}, when simplified, has a term proportional to \mathbf{L} \cdot \boldsymbol{\sigma}.

Solution.

Consider the operation

\begin{aligned}\boldsymbol{\sigma} \cdot \mathbf{p} V(r) \Psi&=- i \hbar \sigma_k \partial_k \left( V(r) \Psi \right) \\ &=- i \hbar \sigma_k (\partial_k V(r)) \Psi + V(r) (\boldsymbol{\sigma} \cdot \mathbf{p} ) \Psi  \\ \end{aligned}

With r = \sqrt{\sum_j x_j^2}, we have

\begin{aligned}\partial_k V(r) = \frac{1}{{2}}\frac{1}{{r}} 2 x_k \frac{\partial {V(r)}}{\partial {r}} = \frac{x_k}{r} \frac{\partial {V(r)}}{\partial {r}},\end{aligned}

which gives us the commutator

\begin{aligned}\left[{ \boldsymbol{\sigma} \cdot \mathbf{p}},{V(r)}\right]&=- \frac{i \hbar}{r} \frac{\partial {V(r)}}{\partial {r}} (\boldsymbol{\sigma} \cdot \mathbf{x}) \end{aligned} \hspace{\stretch{1}}(3.72)

Inserting this into the operator in question, we have

\begin{aligned}\boldsymbol{\sigma} \cdot \mathbf{p} V(r) \boldsymbol{\sigma} \cdot \mathbf{p} =- \frac{i \hbar}{r} \frac{\partial {V(r)}}{\partial {r}} (\boldsymbol{\sigma} \cdot \mathbf{x}) (\boldsymbol{\sigma} \cdot \mathbf{p} ) + V(r) (\boldsymbol{\sigma} \cdot \mathbf{p} )^2\end{aligned} \hspace{\stretch{1}}(3.73)

With a decomposition of (\boldsymbol{\sigma} \cdot \mathbf{x}) (\boldsymbol{\sigma} \cdot \mathbf{p} ) into symmetric and antisymmetric components, we expect to find our \boldsymbol{\sigma} \cdot \mathbf{L} in the antisymmetric (commutator) term

\begin{aligned}(\boldsymbol{\sigma} \cdot \mathbf{x}) (\boldsymbol{\sigma} \cdot \mathbf{p} )=\frac{1}{{2}} \left\{{\boldsymbol{\sigma} \cdot \mathbf{x}},{\boldsymbol{\sigma} \cdot \mathbf{p}}\right\}+\frac{1}{{2}} \left[{\boldsymbol{\sigma} \cdot \mathbf{x}},{\boldsymbol{\sigma} \cdot \mathbf{p}}\right]\end{aligned} \hspace{\stretch{1}}(3.74)

where we expect \boldsymbol{\sigma} \cdot \mathbf{L} \propto \left[{\boldsymbol{\sigma} \cdot \mathbf{x}},{\boldsymbol{\sigma} \cdot \mathbf{p}}\right]. Alternatively, in components

\begin{aligned}(\boldsymbol{\sigma} \cdot \mathbf{x}) (\boldsymbol{\sigma} \cdot \mathbf{p} )&=\sigma_k x_k \sigma_j p_j \\ &=x_k p_k I + \sum_{j\ne k} \sigma_k \sigma_j x_k p_j \\ &=x_k p_k I + i \sum_m \epsilon_{kjm} \sigma_m x_k p_j \\ &=I (\mathbf{x} \cdot \mathbf{p}) + i (\boldsymbol{\sigma} \cdot \mathbf{L})\end{aligned}

Substituting this back into the first term of 3.73, the -i and the i multiply to produce a term \frac{\hbar}{r} \frac{\partial {V(r)}}{\partial {r}} (\boldsymbol{\sigma} \cdot \mathbf{L}), proportional to \mathbf{L} \cdot \boldsymbol{\sigma} as desired.
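For ordinary commuting vectors this reduces to the familiar identity (\boldsymbol{\sigma} \cdot \mathbf{a})(\boldsymbol{\sigma} \cdot \mathbf{b}) = (\mathbf{a} \cdot \mathbf{b}) I + i \boldsymbol{\sigma} \cdot (\mathbf{a} \times \mathbf{b}), easily spot-checked numerically (a sketch assuming numpy; with the operators \mathbf{x} and \mathbf{p} the ordering x_k p_j must of course be preserved):

```python
import numpy as np

# Pauli matrices
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]

def sdot(v):
    """sigma . v for a 3-component vector v."""
    return sum(vk * sk for vk, sk in zip(v, s))

rng = np.random.default_rng(0)
a, b = rng.standard_normal(3), rng.standard_normal(3)  # arbitrary commuting vectors

lhs = sdot(a) @ sdot(b)
rhs = np.dot(a, b) * np.eye(2) + 1j * sdot(np.cross(a, b))
assert np.allclose(lhs, rhs)
```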

Problem 9.

Statement.

Solution.

TODO.

References

[1] B.R. Desai. Quantum Mechanics with Basic Field Theory. Cambridge University Press, 2009.
