# Peeter Joot's (OLD) Blog.


# Archive for June, 2011

## Forcing DB2 client side connections to hang on one of the members.

Posted by peeterjoot on June 23, 2011

For test purposes I wanted to force an inter-member LOCK dependency. This turns out to be pretty easy. First I created a table with a few dummy rows:

```
db2 create table x '(x int)'
db2 insert into x 'values(1)'
db2 insert into x 'values(2)'
db2 insert into x 'values(3)'
db2 commit
```


Then set up an uncommitted update to these rows:

```
# member 1
db2 connect to testdb2
db2 +c update x set x = x + 1
```


Doing the same thing on another member hangs as expected:

```
# member 2
db2 connect to testdb2
db2 +c update x set x = x + 1
```


A commit on the first member resolves the hang. Interestingly, one can do a ‘select * from x’ without any trouble, since concurrent read and update are allowed, but one gets the pre-commit values of the rows in the table.

## code that unfortunately compiles without warning

Posted by peeterjoot on June 22, 2011

I was amazed that this code compiles.

```c
#include <stdlib.h>

void foo()
{
exit:

    exit ;
}
```


In the actual function, the exit that should have been a return was followed by a bunch of stuff that included a goto, and things got “very interesting”!

Unfortunately, the Intel compiler doesn’t even warn about this one (gcc does, but we no longer use gcc for our Intel-targeted ports).

Posted in C/C++ development and debugging. | Tagged: , | 1 Comment »

## An unusual volatile required example.

Posted by peeterjoot on June 21, 2011

For code that is regularly subjected to invasive strip searches by the performance police, it is not uncommon for the developers who own that code to try to avoid some use of mutual exclusion methods. One pattern that is routinely re-invented is taking a peek at something while the mutex is not held, then acquiring the mutex “for real” based on the state of that something. Example:

```c
int v = pSharedData->myVariable ;

if ( v )
{
    // ... acquire the mutex here ...

    // myVariable always updated while the mutex is locked.  Get the current value, not
    // necessarily the same as what was read above.
    v = pSharedData->myVariable ;

    // Now do something with it:
    foo( v ) ;
}
```


This code will often not work. One hidden requirement is that myVariable must be volatile. If that’s not the case, then the compiler can reuse the value that was first read. If there is no “peek” of the variable outside of the lock hold scope, then things work out, because the mutex acquisition has unknown side effects and the compiler won’t read from that memory until after the function call is executed. Note that if you implement your own mutual exclusion code (which can be done inline), you’ll need something to force code ordering as well as dealing with the multiprocess or multithread mutual exclusion mechanism itself.

I saw an example where volatile was required that is similar to this usual one, but pretty much completely inverted. Here’s what the code looked like; it is executed in a context where the mutex that protects the write accesses to myVariable has not yet been locked:

```c
unsigned v = pSharedMem->myVariable ;
if ( v != 0 )
{
    do {
        v-- ;

        if ( pSharedMem->someArray[ v ] == pp )
        {
            foo() ;

            break ;
        }
    } while ( v ) ;
}
```


Without myVariable declared volatile, one compiler, even with optimization enabled, chose to generate code for this fragment as if two loads had been done, as in the following:

```c
if ( pSharedMem->myVariable )
{
    unsigned v = pSharedMem->myVariable ;

    do {
        v-- ;

        if ( pSharedMem->someArray[ v ] == pp )
        {
            foo() ;

            break ;
        }
    } while ( v ) ;
}
```


The lesson, as usual, was that there’s a big maintenance cost to attempting to avoid normal mutual exclusion mechanisms. This bit of mutex-avoidance code had been carefully thought through, but once tried in production it yielded this nicely hidden, platform-specific, compiler-optimization-level-specific run-time bug. It was a surprising side effect that took the developers who owned the code a long time to track down.

This particular fragment of performance critical code was fixed by adding a volatile attribute to the myVariable declaration. Doing so means that the compiler is not free to re-load the variable if it has been read into a local and used only from there. It is curious that the compiler would choose to do so at all. There was no obvious reason, like a stack spill around a function call, for the compiler to make this code generation choice, but without the volatile it was permitted to do so.

Posted in C/C++ development and debugging. | Tagged: , , | 11 Comments »

## $TMPDIR environment variable.

Posted by peeterjoot on June 20, 2011

/tmp filling up on Unix is a pain in the butt. When it happens everybody is affected, and it can destabilize the machine. On many of our work machines home is configured much bigger than /tmp, so it’s generally advisable not to use /tmp itself. I’ll always include something like the following in my .profile (or .bash_profile)

```
export TMPDIR=$HOME/tmp
```


(where I’ve run mkdir -p $HOME/tmp when creating my .profile initially). Many people don’t know that /tmp can be avoided, but most system commands (and many scripts that are written politely) will respect the $TMPDIR setting. It’s also easier to clean up your stuff, since you don’t have to search /tmp/ for it. When some of the $HOME directories are split onto different filesystems, systematic use of $TMPDIR can make it harder for one hog to bog down everybody.

I’d like to see a ramdisk /tmp/ used by default on Linux (we are using SLES11, for which this doesn’t appear to be the default), like it is on Solaris. Then things start off clean after every reboot.

## On tensor product generators of the gamma matrices.

Posted by peeterjoot on June 20, 2011

[Click here for a PDF of this post with nicer formatting (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

# Motivation.

In [1] Zee writes

\begin{aligned}\gamma^0 &=\begin{bmatrix}I & 0 \\ 0 & -I\end{bmatrix}=I \otimes \tau_3 \\ \gamma^i &=\begin{bmatrix}0 & \sigma^i \\ -\sigma^i & 0\end{bmatrix}=\sigma^i \otimes i \tau_2 \\ \gamma^5 &=\begin{bmatrix}0 & I \\ I & 0\end{bmatrix}=I \otimes \tau_1\end{aligned}

The Pauli matrices $\sigma^i$ I had seen, but not the $\tau_i$ matrices, nor the $\otimes$ notation. Strangerep on physicsforums points out that the $\otimes$ is a Kronecker matrix product, a special kind of tensor product [2]. Let’s do the exercise of reverse engineering the $\tau$ matrices as suggested.

# Guts

Let’s start with $\gamma^5$. We want

\begin{aligned}\gamma^5 = I \otimes \tau_1 =\begin{bmatrix}I \tau_{11} & I \tau_{12} \\ I \tau_{21} & I \tau_{22} \\ \end{bmatrix}= \begin{bmatrix}0 & 1 \\ 1 & 0\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(2.1)

By inspection we must have

\begin{aligned}\tau_1 = \begin{bmatrix}0 & 1 \\ 1 & 0\end{bmatrix}= \sigma^1\end{aligned} \hspace{\stretch{1}}(2.2)

Thus $\tau_1 = \sigma^1$. How about $\tau_2$? For that matrix we have

\begin{aligned}\gamma^i = \sigma^i \otimes i \tau_2 =\begin{bmatrix}\sigma^i \tau_{11} & \sigma^i \tau_{12} \\ \sigma^i \tau_{21} & \sigma^i \tau_{22} \\ \end{bmatrix}= \begin{bmatrix}0 & \sigma^i \\ -\sigma^i & 0\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(2.3)

Again by inspection we must have

\begin{aligned}i \tau_2 = \begin{bmatrix}0 & 1 \\ -1 & 0\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(2.4)

so

\begin{aligned}\tau_2 = \begin{bmatrix}0 & -i \\ i & 0\end{bmatrix}= \sigma^2.\end{aligned} \hspace{\stretch{1}}(2.5)

This one is also just the Pauli matrix. For the last we have

\begin{aligned}\gamma^0 = I \otimes \tau_3 =\begin{bmatrix}I \tau_{11} & I \tau_{12} \\ I \tau_{21} & I \tau_{22} \\ \end{bmatrix}= \begin{bmatrix}1 & 0 \\ 0 & -1\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(2.6)

Our last tau matrix is thus

\begin{aligned}\tau_3 = \begin{bmatrix}1 & 0 \\ 0 & -1\end{bmatrix}= \sigma^3.\end{aligned} \hspace{\stretch{1}}(2.7)
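As a cross-check of the Kronecker convention used above (blocks $A \tau_{jk}$), multiplying out $I \otimes \tau_3$ explicitly gives the full $4 \times 4$ matrix

\begin{aligned}I \otimes \tau_3 = \begin{bmatrix}I \tau_{11} & I \tau_{12} \\ I \tau_{21} & I \tau_{22}\end{bmatrix}= \begin{bmatrix}1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1\end{bmatrix},\end{aligned}

which is exactly the block form of $\gamma^0$ quoted in the motivation.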

It is curious that two notations are used on the same page for exactly the same thing. It appears that I wasn’t the only person confused by this.

# The bivector expansion

Zee writes his wedge products with the commutator, adding a complex factor

\begin{aligned}\sigma^{\mu\nu} = \frac{i}{2} \left[{\gamma^\mu},{\gamma^\nu}\right]\end{aligned} \hspace{\stretch{1}}(3.8)

Let’s try the direct product notation to expand $\sigma^{0 i}$ and $\sigma^{ij}$. That first is

\begin{aligned}\sigma^{0 i} &= \frac{i}{2} \left( \gamma^0 \gamma^i - \gamma^i \gamma^0 \right) \\ &= i \gamma^0 \gamma^i \\ &= i (I \otimes \tau_3)(\sigma^i \otimes i \tau_2) \\ &= i^2 \sigma^i \otimes \tau_3\tau_2 \\ &= - \sigma^i \otimes (-i \tau_1) \\ &= i \sigma^i \otimes \tau_1 \\ &= i \begin{bmatrix}0 & \sigma^i \\ \sigma^i & 0\end{bmatrix},\end{aligned}

which is what was expected. The second bivector, for $i=j$ is zero, and for $i\ne j$ is

\begin{aligned}\sigma^{i j} &= i \gamma^i \gamma^j \\ &= i (\sigma^i \otimes i \tau_2) (\sigma^j \otimes i \tau_2) \\ &= i^3 (\sigma^i \sigma^j) \otimes I \\ &= i^4 (\epsilon_{ijk} \sigma^k) \otimes I \\ &= \epsilon_{ijk} \begin{bmatrix}\sigma^k & 0 \\ 0 & \sigma^k\end{bmatrix}.\end{aligned}
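As a concrete check (my addition, not in the original), picking $i = 1, j = 2$ and using $\epsilon_{123} = 1$ gives

\begin{aligned}\sigma^{1 2} = \epsilon_{12k} \begin{bmatrix}\sigma^k & 0 \\ 0 & \sigma^k\end{bmatrix}= \begin{bmatrix}\sigma^3 & 0 \\ 0 & \sigma^3\end{bmatrix}= \begin{bmatrix}1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1\end{bmatrix},\end{aligned}

the familiar diagonal generator of rotations about the z-axis.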

# References

[1] A. Zee. Quantum field theory in a nutshell. Universities Press, 2005.

[2] Wikipedia. Tensor product — wikipedia, the free encyclopedia [online]. 2011. [Online; accessed 21-June-2011]. http://en.wikipedia.org/w/index.php?title=Tensor_product&oldid=418002023.

## Believed to be typos in Desai’s QM Text

Posted by peeterjoot on June 19, 2011

[Click here for a PDF of this post with nicer formatting (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

I have found some of the obvious stuff in my reading of [1]. Prof.\ Vatche Deyirmenjian, who teaches our PHY356 course, has pointed out still more (and pointed out where I’d identified the wrong source for some typos).

# Chapter 1.

\begin{itemize}
\item Page 1. Prof.\ Deyirmenjian: The Hermitian, not complex conjugate, of ${\lvert {} \rangle}$ is ${\langle {} \rvert}$.
\item Page 5-6. Prof.\ Deyirmenjian: Change the ${*}$ in (1.26), (1.31), and (1.33) to a dagger.
\item Page 7. Text before (1.43). $\alpha$ instead of $a$ used.
\item Page 19. Equation (1.122). $\dagger$s omitted after first equality.
\end{itemize}

# Chapter 2.

\begin{itemize}
\item Page 40. Text before (2.137). Reference to equation (2.133) should be (2.135)
\item Page 53. Is the “Also show that” here correct? I get a different answer.
\end{itemize}

# Chapter 3.

\begin{itemize}
\item Page 61. Equation (3.51). $1/\hbar$ missing.
\item Page 62. Equation (3.58). Prof.\ Deyirmenjian: Remove the $U_I$ operators from Eq. (3.58)
\item Page 66. Equation (3.92). $-(d/dt {\langle {\alpha} \rvert}) {\lvert {\alpha} \rangle}$ should be $+{\lvert {\alpha} \rangle} d/dt {\langle {\alpha} \rvert}$.
\item Page 66. Equation (3.93). $H$ on wrong side of ${\langle {\alpha} \rvert}$
\item Page 74,76. Prof.\ Deyirmenjian: remove the extra brackets from Eq (4.9) and (4.21).
\item Page 79. Prof.\ Deyirmenjian: “The probability of finding this particle” should read “The probability density for this state at point x is”
\end{itemize}

# Chapter 4.

\begin{itemize}
\item Page 81. Equation (4.52). Should be $-2\alpha$ in the exponent.
\item Page 82. Equation (4.65). Prof.\ Deyirmenjian: a $1/\sqrt{2\pi}$ is missing before the integral. Note that without this, (4.67) appears incorrect (off by a factor of $\sqrt{2\pi}$), but the error is really just in (4.65).
\item Page 82. Equation (4.67). Prof.\ Deyirmenjian: the negative sign should appear inside the large square brackets.

\item Page 83. Equation (4.74). A normalized wave function isn’t required for the discussion, but if that was intended, a $1/\sqrt{2\pi}$ factor is missing.
\item Page 83-84. Prof.\ Deyirmenjian: In (4.67) and (4.77), the derivative should be evaluated at $k=k_0$.
\item Page 86. Equation (4.99). Extra brace in the exponent.
\item Page 87. Equation (4.106). Extra brace in the exponent.
\item Page 89. Equation (4.124-4.130). Prof.\ Deyirmenjian: $C e^{\pm \sqrt{\mu}\phi}$ is not a solution to (4.122). This should be $Q(\phi) = C e^{i \sqrt{\mu} \phi}$ and (4.126) should be $\sqrt{\mu} = m$. This fixes the apparent error in sign in equations 4.129 and 4.130 which are correct as is.
\item Page 92. Equation (4.158). Prof.\ Deyirmenjian: should read $P_l(1) = 1$.
\item Page 93. Equation (4.169). conjugation missing for $Y_{lm}$. $Y_{l'm'}$ is missing prime on the $l$ index.
\item Page 95. Second line of text. Language choice? “We now implement”. Perhaps “utilize” would be better.
\item Page 95. Text before (4.193). $i$ is in bold.
\item Page 96. Text before (4.196). $i$ is in bold.
\item Page 97. (4.205). $i$ is in bold.
\item Page 97. (4.207-209). $\mathbf{i}$, and $\mathbf{j}$s aren’t in bold like $\mathbf{k}$
\item Page 101. (4.245). The right side should read $Y_{l,m+1}$
\item Page 101. (4.239-240). The approach here is unclear. FIXME: incorporate lecture notes from class that did this using braket notation.
\item Page 102. (4.248-249). Commas missing to separate $l$, and $m\pm 1$ in the kets.
\end{itemize}

# Chapter 5.

\begin{itemize}
\item Page 109. (5.49). Remove bold font in right hand side state ${\lvert {\chi_{n+}} \rangle}$.
\item Page 113. (5.86). One $\sigma$ isn’t in bold.
\item Page 114. (5.100). $\chi$ is in bold.
\item Page 115. Text before (5.106). $\alpha$ in bold.
\item Page 118. Switch of notation in problem 5 for ensemble averages. $[S_i]$ used instead of $\left\langle{{S_i}}\right\rangle_{\text{av}}$.
\end{itemize}

# Chapter 6.

\begin{itemize}
\item Page 120. $\phi$ in bold. $A$ not in bold.
\item Page 123. (6.26). $1/i \hbar$ factor missing on RHS.
\item Page 124. Text before (6.37). You say canonical momenta $P_k$, but call these mechanical momenta on prev page.
\item Page 125. (6.41). Some $\psi$s are in bold.
\item Page 126. (6.49). There’s no mention that $\mathbf{B}$ is constant, leaving it unclear how the gauge condition and how the curl of $\mathbf{A}$ reproduces $\mathbf{B}$. This would also help clarify how you are able to write $\boldsymbol{\mu} \cdot \mathbf{B} = \mathbf{B} \cdot \boldsymbol{\mu}$.
\item Page 128. (6.65). $\boldsymbol{\mu} \cdot \mathbf{L}$ should be $\boldsymbol{\mu} \cdot \mathbf{B}$.
\item Page 129. (6.75). $\boldsymbol{\mu} \cdot \mathbf{L}$ should be $\boldsymbol{\mu} \cdot \mathbf{B}$.
\item Page 130. (6.80). integral looks like it should be $\int_{\mathbf{r}' = \mathbf{r}_0}^\mathbf{r} \frac{e}{c \hbar} \mathbf{A}(\mathbf{r}') \cdot d\mathbf{r}'$. ie: Clarify bounds, and add a factor of $c$ in the denominator which is required for the cancellation of (6.82).
\item Page 131. (6.81,6.86). Factors of $c$s should be with each of the $\hbar$s.
\item Page 131. Problem 1. bold missing on $\mathbf{E}$.
\end{itemize}

# Chapter 8.

\begin{itemize}
\item Page 143. (8.58). $\beta$ should be negated.
\item Page 159. (8.6.3). Two references to Chapter 2 should be Chapter 4.
\item Page 160. (8.199). Want $\hbar^2$ not $\hbar$ in expression for $k$.
\item Page 162. (Fig 8.9). Figure is backwards compared to text (a bump instead of a well).
\item Page 165. (8.235). Extra $R_l$ factor inside parens.
\end{itemize}

# Chapter 9.

\begin{itemize}
\item Page 174. (9.5). Have $\hbar/2m\omega$ instead of $\hbar m \omega/2$ in expression for $P$.
\item Page 181. (9.57). Factor of two missing. Want $\frac{\alpha}{2 \sqrt{\pi}}$.
\item Page 186. (Problem 10). Sequencing of the text and problems is off. The Green’s function technique isn’t introduced until chapter 10.
\end{itemize}

# Chapter 10.

\begin{itemize}
\item Page 189. (10.22). It would be nice to have a reference to the appendix (ie: 10.100) for the chapter so that this identity isn’t pulled out of a magic hat.
\item Page 192. (10.44, 10.45). $2 \alpha {\alpha^{*}}'$ should be $\alpha {\alpha^{*}}' + \alpha' \alpha^{*}$
\item Page 193. (10.51). Application (slowly, step by step explicitly) of 10.100 to expand the $e^{\frac{i}{\hbar}(p_0 X - x_0 P)}$ in the braket gives

\begin{aligned}{\langle {x} \rvert} e^{\frac{i}{\hbar}(p_0 X - x_0 P)} {\lvert {0} \rangle}&={\langle {x} \rvert} e^{\frac{i}{\hbar}p_0 X }e^{-\frac{i}{\hbar}x_0 P}e^{-\frac{i}{2\hbar}x_0 p_0 \left[{X},{P}\right]}{\lvert {0} \rangle} \\ &={\langle {x} \rvert} e^{\frac{i}{\hbar}p_0 X }e^{-\frac{i}{\hbar}x_0 P}e^{\frac{x_0 p_0}{2} }{\lvert {0} \rangle} \\ &=e^{\frac{x_0 p_0}{2} }{\langle {x} \rvert} e^{\frac{i}{\hbar}p_0 X} e^{-\frac{i}{\hbar}x_0 P}{\lvert {0} \rangle} \\ &=e^{\frac{x_0 p_0}{2} }\left({\langle {0} \rvert} e^{\frac{i}{\hbar}x_0 P}e^{-\frac{i}{\hbar}p_0 X} {\lvert {x} \rangle}\right)^{*} \\ &=e^{\frac{x_0 p_0}{2} }\left({\langle {0} \rvert} e^{\frac{i}{\hbar}x_0 P}{\lvert {x} \rangle}e^{-\frac{i}{\hbar}p_0 x} \right)^{*} \\ &=e^{\frac{x_0 p_0}{2} } e^{\frac{i}{\hbar}p_0 x} \left({\langle {0} \rvert} e^{\frac{i}{\hbar}x_0 P}{\lvert {x} \rangle}\right)^{*} \\ &=e^{\frac{x_0 p_0}{2} } e^{\frac{i}{\hbar}p_0 x} \left(\left\langle{{0}} \vert {{x - x_0}}\right\rangle\right)^{*} \\ &=e^{\frac{x_0 p_0}{2} } e^{\frac{i}{\hbar}p_0 x} \left\langle{{x - x_0}} \vert {{0}}\right\rangle \\ &=e^{\frac{x_0 p_0}{2} } e^{\frac{i}{\hbar}p_0 x} \psi_0(x - x_0, 0)\end{aligned}

This is the same as (10.51) with the exception of a real scalar constant $e^{ x_0 p_0/2}$ multiplying the wave function. Because of this I think that (10.51) should be a proportionality statement, and not an equality as in

\begin{aligned}{\langle {x} \rvert} e^{\frac{i}{\hbar}(p_0 X - x_0 P)} {\lvert {0} \rangle} \propto e^{\frac{i}{\hbar}p_0 x} \psi_0(x - x_0, 0)\end{aligned}

\item Page 196. (text after 10.76). Looks like reference to Chapter 9, should be Chapter 9 problem 5.

\item Page 197. (text after 10.85). Reference to Chapter 1 should be Chapter 2.
\end{itemize}

# Chapter 26.

\begin{itemize}
\item Page 486. (26.60). $\mathbf{n} \times \mathbf{r} \cdot \boldsymbol{\nabla}$ ought to have braces and read $(\mathbf{n} \times \mathbf{r}) \cdot \boldsymbol{\nabla}$.
\item Page 496. (26.154). Remove $Y_{l'm}(\theta, \phi)$ term from the integral.
\end{itemize}

# Chapter 31.

\begin{itemize}
\item Page 562. (31.56). $T' = L T \tilde{M}$ is given for a mixed tensor representation. This is $T^\mu_{.\nu}$. The other mixed representation $T_{\mu}^{.\nu}$ transforms as $T' = M T \tilde{L}$.
\end{itemize}

# Chapter 32.

\begin{itemize}
\item Page 575. minor: $E t - \mathbf{p} . \mathbf{r}$ written instead of $E t - \mathbf{p} \cdot \mathbf{r}$
\item Page 576. minor: $\boldsymbol{\nabla} . \mathbf{j}$ instead of $\boldsymbol{\nabla} \cdot \mathbf{j}$
\item Page 577. (32.23). $\mathbf{j}$ accidentally includes the divergence.
\item Page 579. (32.35). Sign missing in exponential. Should be $e^{-i k \cdot x}$ not $e^{i k \cdot x}$.
\item Page 579. $\hbar \omega_k$ is the energy of the particle, not $\omega_k$. There’s also an $\hbar$ missing in the expression for $\omega_k$. That is $\omega_k = \sqrt{ c^2 \mathbf{k}^2 + m_0^2 c^4/\hbar^2}$.
\item Page 580. (32.40). The factor of $g$ presumed constant ought to be incorporated into $\chi$ if this is to be consistent with the (32.45) that follows.
\item Page 583. (32.70). sign error. negate integral.
\item Page 584. (32.74). Sign error in both the square root and the subsequent approximation, which should be $p_4 = \pm \sqrt{ \mathbf{p}^2 + m^2 - i \epsilon} \approx \pm (\omega_p - i \epsilon')$. (I’ve added an approximately-equals for the second part, since that wasn’t specified, which I found confusing.)
\item Page 584. (32.75-76). there are multiple sign errors in these equations which should be

\begin{aligned}\frac{1}{{p^2 - m^2 + i\epsilon}} &\approx \frac{1}{{(p_4 - \omega_p + i\epsilon')(p_4 + \omega_p - i\epsilon')}} \\ &\approx\frac{1}{{2 \omega_p}}\left(\frac{1}{{p_4 - \omega_p + i\epsilon'}}-\frac{1}{{p_4 + \omega_p - i\epsilon'}}\right)\end{aligned}

Note that an attempt to confirm (32.76) yields

\begin{aligned}\frac{1}{{p_4 - \omega_p + i\epsilon'}}-\frac{1}{{p_4 + \omega_p - i\epsilon'}}=\frac{2 \omega_p - 2 i \epsilon'}{ p^2 - m^2 + i \epsilon + \epsilon^2/4(\mathbf{p}^2 + m^2)}\end{aligned}

So we need approximations twice for the “equality”.

\item Page 584. (before 32.78). minor: bold script used for $\mathbf{p} \cdot \mathbf{r}$ on second line of the change of variables.
\item Page 585. (32.82, 32.83). minor: $p_n . x$ instead of $p_n \cdot x$.
\item Page 585. (32.82, 32.83). wrong normalization? wouldn’t we want $1/\sqrt{2 \omega_{p_n}}$.
\item Page 585. (32.84). notation switch? $0n$ index whereas $n0$ used above? What are the definitions of $\psi_{0n}$? that allow the integral to be converted to a sum?
\item Page 586. (32.88). minor: $i \mathbf{k} .$ instead of $i \mathbf{k} \cdot$
\item Page 586. (32.88). I calculate a negated $G(\mathbf{k})$ from (32.88). Guessing that (32.88) and (32.93 on pg 587) were intended to be negated, as done earlier (for example in (32.57)).
\item Page 587. (32.93). minor: $i \mathbf{k} .$ instead of $i \mathbf{k} \cdot$
\item Page 588-589. (32.104-105). This substitution doesn’t appear to work?

\begin{aligned}&(\boldsymbol{\nabla}^2 - \mu^2)(\phi' e^{-\mu r}) \\ &= \frac{1}{{r^2}} \frac{\partial {}}{\partial {r}} \left( r^2 \frac{\partial {}}{\partial {r}} \left( \phi' e^{-\mu r} \right) \right) - \mu^2 \phi' e^{-\mu r} \\ &= \frac{1}{{r^2}} \frac{\partial {}}{\partial {r}} \left( r^2 \left( \frac{\partial {\phi'}}{\partial {r}} e^{-\mu r} -\mu \phi' e^{-\mu r} \right) \right) - \mu^2 \phi' e^{-\mu r} \\ &= \frac{\partial {}}{\partial {r}} \left( \left( \frac{\partial {\phi'}}{\partial {r}} e^{-\mu r} -\mu \phi' e^{-\mu r} \right) \right) + 2 \frac{1}{{r}} \left( \frac{\partial {\phi'}}{\partial {r}} e^{-\mu r} -\mu \phi' e^{-\mu r} \right) - \mu^2 \phi' e^{-\mu r} \\ &= \frac{\partial^2 {{\phi'}}}{\partial {{r}}^2} e^{-\mu r} -\mu \frac{\partial {\phi'}}{\partial {r}} e^{-\mu r} -\mu \left( \left( \frac{\partial {\phi'}}{\partial {r}} e^{-\mu r} - {\mu \phi' e^{-\mu r}} \right) \right) + 2 \frac{1}{{r}} \left( \frac{\partial {\phi'}}{\partial {r}} e^{-\mu r} -\mu \phi' e^{-\mu r} \right) - {\mu^2 \phi' e^{-\mu r}} \\ &=\frac{\partial^2 {{\phi'}}}{\partial {{r}}^2} e^{-\mu r} -\mu \frac{\partial {\phi'}}{\partial {r}} e^{-\mu r} -\mu \frac{\partial {\phi'}}{\partial {r}} e^{-\mu r} + 2 \frac{1}{{r}} \left( \frac{\partial {\phi'}}{\partial {r}} e^{-\mu r} -\mu \phi' e^{-\mu r} \right) \\ &=\left( \frac{\partial^2 {{\phi'}}}{\partial {{r}}^2} + 2 \frac{1}{{r}} \frac{\partial {\phi'}}{\partial {r}} \right) e^{-\mu r} -2 \mu \frac{\partial {\phi'}}{\partial {r}} e^{-\mu r} -2 \mu \frac{1}{{r}} \phi' e^{-\mu r} \\ &=(\boldsymbol{\nabla}^2 \phi') e^{-\mu r}- 2 \mu \left( \frac{\partial {\phi'}}{\partial {r}} + \frac{1}{{r}} \phi' \right) e^{-\mu r} \\ &=(\boldsymbol{\nabla}^2 \phi') e^{-\mu r}- 2 \frac{\mu}{r} \left( \frac{\partial {(r \phi')}}{\partial {r}} \right) e^{-\mu r} \\ \end{aligned}

There’s an extra term here that doesn’t show up in (32.105) with this transformation. Can that be argued away somehow?

\item Page 589. (32.107). minor: $i \mathbf{k}_f .$ instead of $i \mathbf{k}_f \cdot$
\item Page 589. (32.108). minor: $i \mathbf{q} .$ instead of $i \mathbf{q} \cdot$
\item Page 590. (after 32.118). minor: $i \mathbf{q} .$ instead of $i \mathbf{q} \cdot$
\item Page 591. (32.124). minor: $i \mathbf{q} .$ instead of $i \mathbf{q} \cdot$
\end{itemize}

NOTE: up to commit efc6cd3bfee91f43949016f8ba851de273e4fa8d of these notes emailed to Desai May 10, 2011.

# Chapter 33.

\begin{itemize}
\item Page 597. (before 33.5). Last paragraph references chapter 12. Chapter 32 meant here? (or chapter 4).
\end{itemize}

# Chapter 35.

\begin{itemize}
\item Page 635. (35.46). minor: Bold on gamma.
\item Page 636. (35.50). minor: Bold on gamma.
\item Page 636-638. (35.51-35.58). minor: incomplete notation switch. This chapter uses $E_p$ instead of ${\left\lvert{E}\right\rvert}$, but many formulas on these pages continue to use the ${\left\lvert{E}\right\rvert}$ notation from chapter 33, even mixing the two in some places.
\item Page 643. (35.107). $e_\mu^{\nu}$ should be $e_\mu^{.\nu}$. There are also some missing positional indicators in (35.105) and (35.106).
\item Page 643. (35.114). $\bar{\psi}'(x')\psi(x')$ should be $\bar{\psi}'(x')\psi'(x')$.
\item Page 645. (35.134). Wrong sign on $\gamma_5$. It should be $-i \gamma^1 \gamma^2 \gamma^3 \gamma^4$.
\end{itemize}

# Chapter 36.

\begin{itemize}
\item Page 648. (36.16-17). $\hbar$s should be omitted for consistency.
\item Page 648. (36.16-17). It appears that the $- e \boldsymbol{\sigma}'$ should be $+ e \boldsymbol{\sigma}'$.
\end{itemize}

# References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.

Posted in Math and Physics Learning. | Tagged: , | Comments Off on Believed to be typos in Desai’s QM Text

## Just Energy. Attempts to silence bloggers with legal action.

Posted by peeterjoot on June 15, 2011

I’d posted some of my thoughts about Just Energy Canada, after being thoroughly unimpressed with what seemed to me extremely unethical and coercive tactics. A reader of that post commented:

You didn’t have to pay them if you moved to a new location that the company doesn’t service, or if you moved to a location where you are NOT responsible for paying your utility (your name is not on the utility bills).

They don’t tell you this and send you straight to collection, because it’s an easy $450. The same thing happened with my relative: they sold their home and moved to a small apartment. They were no longer responsible for their utility; their landlord was. They paid $600 to Universal Energy (now Just Energy) for cancellation. That’s why the company is doing this: easy money from uninformed customers.

I would send a complaint letter to the local government regulator and tell them your story. If you don’t complain, no one will know; the government regulator can’t keep track of the trend if no one complains, and no one will know what kind of company Just Energy really is.

Read more about how to cancel Just Energy; you don’t have to pay a cent. Also, here is a list of local government energy regulators to file your complaint with. Please file your complaint so your friends, family and neighbors do not have to waste time with this garbage.

My response to this was:

Thanks for the tips (I can’t take advantage of the non-proximity tip since I still live close to my old house and continue to be serviced by Enbridge gas).

I will definitely send a complaint letter to the Ontario Energy Board. I’ve informed just energy that I intend to do so.

Imagine an honest version of the Just Energy door to door salesman:

– Sign up with us, and you’ll never receive any information letting you know if you are actually saving money, or even breaking even.
– Should you move, despite your old gas bill being paid in full, and the account closed, we will then send collection agencies after you for exorbitant contract breaking fees.
– We won’t attempt to supply you with any information saying that there is a contractual obligation to continue using our company as a distributor. We prefer to send collection agencies first.
– Our company is inept. Should you have a contract dispute, we will take a month and a half to mail you a copy of the contract so that you can verify that there was in fact a legally binding agreement.
– We also cannot email documents in less than two weeks, and have to be called repeatedly to attempt to get us to do so. We will also initially deny that we can even email documentation. We will lose requests to surface mail it, so you’ll have to call us many times, and spend hours on the phone with us.
– Should you separate from your spouse, we will add to your grief. We don’t know anything about compassion, and at a time when you already have enough stress, we consider it our job to add to it. Instead of cancelling a contract associated with a pre-marital home, we require one of the former spousal partners to assume the contract associated with a house that neither party even lives in anymore.
– Should you actually insist on terminating the distribution agreement, we will then let you know that a variable rate agreement is possible. Of course we prefer to price gouge at inflated fixed rates, so this will not be advertised or made known, except as a last resort to keep collecting money from you.

He also wrote:

I was browsing your homepage and all I can see are calculations and equations. Then all of a sudden there is a single post on top about Just Energy. You really hate these guys, and I can’t blame you; there are many good reasons to hate Just Energy.

Yup, if they start to be upfront with homeowners, they won’t be where they are today, a $2 billion corporation. They understand that not a lot of folks know about the regulation of gas and electricity, and that is what they are taking advantage of: misinformed people. And you’re absolutely right, Just Energy doesn’t care about their customers. Just last year they changed their name to Just Energy; before that it was Ontario Energy Savings, Alberta Energy Savings, and down in the states it was US Energy Savings. Ontario Energy Board? Ontario Energy Savings? Which one is a private company and which one is a real government body? Even the name is a scam, and who approved this? Hmmm… Their former names imply some kind of government affiliation, and the name Energy Savings implies savings, but that’s not true. The only time Just Energy will contact you is through the (misleading) door-to-door sale about “energy savings” and then the follow-up call to reaffirm over the phone after 10 days. After that you won’t ever hear from them again. No thank-you package for joining, no mail telling you any information, and when you move, go talk to their collection agency. Gee, I wonder why people hate them. Just look at their own website, I think it’s Justenergy.com, and you won’t find any information about the rates or any comparison of their rates vs the local utility. After all, gas and electricity are their main products. The company argues it is so hard for them because gas and electricity prices change every day. That’s true, but why are there other providers selling the exact same thing with their rates on their websites? Oh, I can go on and on…

To which I said:

Changing names quickly is another form of misdirection. The company probably builds up negative feedback and routinely changes names to help minimize the damage. I wouldn’t say that I hate them, but I do hate having been dumb enough to have been exploited by them. Having had that happen, I’ll not hesitate to make my story available.
Perhaps I can help at least one other person avoid the same thing. As you pointed out, my single rogue post on this company, along with the rest of my math and physics and computer programming obscurity, will likely go unnoticed.

My off the cuff remark about the company changing names periodically is speculation. I don’t know if this has actually happened.

Posted in Incoherent ramblings | Tagged: | Leave a Comment »

## Some thoughts and questions about new plans for CIA actions in Yemen.

Posted by peeterjoot on June 15, 2011

The undeclared US war on Yemen started with the pretense of liberating the country from a tyrannical dictator. Deja-vu, anybody? Sounds like Iraq all over again. My immediate question upon hearing this is wondering who has been financing this dictator and supplying weapons to his armies and police forces. Given the geography, I’m sure oil revenue is part of it, but I’d not be the least surprised if US-provided weapons and weapons training had also been involved, nor would I be surprised if US money was also tossed into the pot.

It also seems to me that the military-industrial-political complex of the US always drives at least one overt active war with at least one country at any given time. What was the longest interval in the last 60 years that the US was not actively running some sort of war campaign?

With congressional pressure to end the Yemen bombings, which have gone on longer than the 60 day maximum time interval allowed by US law without an explicit declaration of war (and congressional financing thereof), we now hear that the CIA will continue the mantle of Yemen bombings under the guise of al-Qaeda and counter-terrorist efforts. This seems to me a direct confirmation that the initial bombing in Yemen under the guise of eliminating a dictator was nothing more than an excuse.
With the Afghan and Iraq wars both unsuccessful and unpopular (unless military spending is the metric for success), Yemen’s leadership provided a good excuse for another war. However, since it is an undeclared war, it looks like it now has to go covert to be maintained. It amazes me how the political and media puppets of the military industry can convince enough people that violent action against believed terrorists has any benefit. It certainly benefits weapons manufacturers and other war profiteers, since this violent action creates both the US terrorism and the stereotypical terrorism that they require to continue their business.

EDIT: foot in mouth. I need a serious geography lesson. I can’t keep the US wars straight in my mind, and appear to have gotten Yemen mixed up with Libya. Yemen sounds like it’s just a covert war right now, whereas Libya is the one that is being waged without congressional approval. To add to the confusion is the talk that Syria is the next US war.

Posted in Incoherent ramblings | Tagged: , , | 1 Comment »

## Dirac spinor notes.

Posted by peeterjoot on June 11, 2011

[Click here for a PDF of this post with nicer formatting (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

# Motivation.

I was having algebraic trouble verifying orthonormality relationships for spinor solutions to the Dirac free particle equation, and initially started preparing these notes to post a question to physicsforums. However, in the process of doing so, I spotted my error. A side effect of making these notes is that I got a nice summary of some of the relationships, and it was a good starting point for some personal notes expanding on the content of these chapters.

# Context for the original question.
In Desai’s QM book [1], the non-covariant form of the free particle equation is developed as \begin{aligned}\begin{bmatrix}E - m & - \boldsymbol{\sigma} \cdot \mathbf{p} \\ - \boldsymbol{\sigma} \cdot \mathbf{p} & E + m\end{bmatrix}u= 0,\end{aligned} \hspace{\stretch{1}}(2.1) where each block in the matrix above is two by two. Recall that \begin{subequations} \begin{aligned}\sigma_1 &= \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix} \\ \sigma_2 &= \begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix} \\ \sigma_3 &= \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix},\end{aligned} \hspace{\stretch{1}}(2.2a) \end{subequations} so \begin{aligned}\boldsymbol{\sigma} \cdot \mathbf{p} =\begin{bmatrix}p_z & p_x - i p_y \\ p_x + i p_y & - p_z\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(2.3) For spin up ${\lvert {+} \rangle}$ and spin down ${\lvert {-} \rangle}$ states, the positive energy solutions $E = {\left\lvert{E}\right\rvert} = \sqrt{\mathbf{p}^2 + m^2}$ are found to be \begin{aligned}u^{\pm}(\mathbf{p}) =\sqrt{\frac{{\left\lvert{E}\right\rvert} + m}{2m}}\begin{bmatrix}{\lvert {\pm} \rangle} \\ \frac{\boldsymbol{\sigma} \cdot \mathbf{p}}{{\left\lvert{E}\right\rvert} + m} {\lvert {\pm} \rangle}\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(2.4) and the negative energy states associated with $E = -{\left\lvert{E}\right\rvert} = -\sqrt{\mathbf{p}^2 + m^2}$ are found to be \begin{aligned}v^{\pm}(\mathbf{p}) =\sqrt{\frac{{\left\lvert{E}\right\rvert} + m}{2m}}\begin{bmatrix}-\frac{\boldsymbol{\sigma} \cdot \mathbf{p}}{{\left\lvert{E}\right\rvert} + m} {\lvert {\pm} \rangle} \\ {\lvert {\pm} \rangle} \\ \end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(2.5) The z-axis spin up state ${\lvert {+} \rangle} = (1, 0)$ and spin down state ${\lvert {-} \rangle} = (0, 1)$ are also used to find one specific set of states for the positive energy solutions \begin{subequations} \begin{aligned}u^{+}(\mathbf{p}) &=\sqrt{\frac{{\left\lvert{E}\right\rvert} + m}{2m}}\begin{bmatrix}1 \\ 0 \\ 
\frac{p_z}{{\left\lvert{E}\right\rvert} + m} \\ \frac{p_x + i p_y}{{\left\lvert{E}\right\rvert} + m} \\ \end{bmatrix} \\ u^{-}(\mathbf{p}) &=\sqrt{\frac{{\left\lvert{E}\right\rvert} + m}{2m}}\begin{bmatrix}0 \\ 1 \\ \frac{p_x - i p_y}{{\left\lvert{E}\right\rvert} + m} \\ -\frac{p_z}{{\left\lvert{E}\right\rvert} + m} \\ \end{bmatrix},\end{aligned} \hspace{\stretch{1}}(2.6a) \end{subequations} and negative energy solutions \begin{subequations} \begin{aligned}v^{+}(\mathbf{p}) &=\sqrt{\frac{{\left\lvert{E}\right\rvert} + m}{2m}}\begin{bmatrix}-\frac{p_z}{{\left\lvert{E}\right\rvert} + m} \\ -\frac{p_x + i p_y}{{\left\lvert{E}\right\rvert} + m} \\ 1 \\ 0 \\ \end{bmatrix} \\ v^{-}(\mathbf{p}) &=\sqrt{\frac{{\left\lvert{E}\right\rvert} + m}{2m}}\begin{bmatrix}-\frac{p_x - i p_y}{{\left\lvert{E}\right\rvert} + m} \\ \frac{p_z}{{\left\lvert{E}\right\rvert} + m} \\ 0 \\ 1 \\ \end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(2.7a) \end{subequations} (the book uses $u^{\pm}$ for both the negative energy states, but I’ve used $v^{\pm}$ here for the negative states for consistency with the covariant equation solutions). Later a complete set of states $u_r(\mathbf{p}), v_r(\mathbf{p})$ are identified as solutions to the covariant Dirac equations $(\gamma \cdot p -m)u = 0$, $(\gamma \cdot p + m) v = 0$, where $p^\mu = (\mathbf{p}, {\left\lvert{E}\right\rvert})$ as follows \begin{subequations} \begin{aligned}u_1(\mathbf{p}) &= u^{+}(\mathbf{p}) \\ u_2(\mathbf{p}) &= u^{-}(\mathbf{p}) \\ v_1(\mathbf{p}) &= v^{+}(-\mathbf{p}) \\ v_2(\mathbf{p}) &= v^{-}(-\mathbf{p}),\end{aligned} \hspace{\stretch{1}}(2.8a) \end{subequations} Note very carefully the sign change above. This is important, since without that we do not have a zero inner product between all $u_r$ and $v_s$ states. 
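That sign flip is easy to spot check numerically. Here is a quick numpy sketch (my own addition; the mass and momentum values are arbitrary) using the states 2.4 and 2.5 and the Dirac adjoint $\bar{\psi} = \psi^\dagger \gamma^4$ of 2.10: with the $-\mathbf{p}$ flip all the $\bar{u}_r v_s$ cross products vanish, and without it they do not.

```python
import numpy as np

# Pauli matrices (2.2)
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]

def sigma_dot(q):
    # sigma . q, the 2x2 matrix of (2.3)
    return sum(qi * si for qi, si in zip(q, sig))

m = 1.0
p = np.array([0.3, -0.4, 0.5])           # arbitrary momentum, p_z != 0
E = np.sqrt(p @ p + m * m)               # |E|
N = np.sqrt((E + m) / (2 * m))
chi = [np.array([1, 0], dtype=complex),  # spin up
       np.array([0, 1], dtype=complex)]  # spin down

def u(q, r):
    # positive energy solution (2.4)
    return N * np.concatenate([chi[r], sigma_dot(q) @ chi[r] / (E + m)])

def v(q, r):
    # negative energy solution (2.5)
    return N * np.concatenate([-sigma_dot(q) @ chi[r] / (E + m), chi[r]])

g4 = np.diag([1, 1, -1, -1]).astype(complex)

def bar(psi):
    # Dirac adjoint, psi-dagger gamma^4 (2.10)
    return psi.conj() @ g4

# with the flip v_r(p) = v^(+-)(-p), all cross inner products vanish ...
for r in (0, 1):
    for s in (0, 1):
        assert abs(bar(u(p, r)) @ v(-p, s)) < 1e-12

# ... but without the flip they do not
assert abs(bar(u(p, 0)) @ v(p, 0)) > 1e-3
```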
Spelled out explicitly, these states for the z-axis spin up case are \begin{subequations} \begin{aligned}u_1(\mathbf{p}) &=\sqrt{\frac{{\left\lvert{E}\right\rvert} + m}{2m}}\begin{bmatrix}1 \\ 0 \\ \frac{p_z}{{\left\lvert{E}\right\rvert} + m} \\ \frac{p_x + i p_y}{{\left\lvert{E}\right\rvert} + m} \\ \end{bmatrix} \\ u_2(\mathbf{p}) &=\sqrt{\frac{{\left\lvert{E}\right\rvert} + m}{2m}}\begin{bmatrix}0 \\ 1 \\ \frac{p_x - i p_y}{{\left\lvert{E}\right\rvert} + m} \\ -\frac{p_z}{{\left\lvert{E}\right\rvert} + m} \\ \end{bmatrix} \\ v_1(\mathbf{p}) &=\sqrt{\frac{{\left\lvert{E}\right\rvert} + m}{2m}}\begin{bmatrix}\frac{p_z}{{\left\lvert{E}\right\rvert} + m} \\ \frac{p_x + i p_y}{{\left\lvert{E}\right\rvert} + m} \\ 1 \\ 0 \\ \end{bmatrix} \\ v_2(\mathbf{p}) &=\sqrt{\frac{{\left\lvert{E}\right\rvert} + m}{2m}}\begin{bmatrix}\frac{p_x - i p_y}{{\left\lvert{E}\right\rvert} + m} \\ -\frac{p_z}{{\left\lvert{E}\right\rvert} + m} \\ 0 \\ 1 \\ \end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(2.9a) \end{subequations}

In order to construct a covariant current conservation relationship, a quantity called the Dirac adjoint was defined as \begin{aligned}\bar{\psi} = \psi^\dagger \gamma^4,\end{aligned} \hspace{\stretch{1}}(2.10) where \begin{aligned}\gamma^4 = \begin{bmatrix}1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \\ \end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(2.11) This Dirac adjoint can be used to form an inner product of the form \begin{aligned}\bar{\psi}\psi.\end{aligned} \hspace{\stretch{1}}(2.12) It’s claimed in the text that we have $\bar{u_r} u_s = \delta_{rs}$, $\bar{v_r} v_s = -\delta_{rs}$, and $\bar{u_r} v_s = 0$. Let’s verify all these relationships.

# Some checks.

## Verify the non-covariant solutions.

A non-relativistic approximation argument was used to determine the solutions 2.6a, but we can verify that these hold generally by substitution.
For example, for the positive energy z-axis spin up state we have \begin{aligned}&\begin{bmatrix}E - m & - \boldsymbol{\sigma} \cdot \mathbf{p} \\ - \boldsymbol{\sigma} \cdot \mathbf{p} & E + m\end{bmatrix}u^{+}(\mathbf{p}) \\ &=\sqrt{\frac{{\left\lvert{E}\right\rvert} + m}{2m}}\begin{bmatrix}E - m & 0 & -p_z & -p_x + i p_y \\ 0 & E - m & -p_x - i p_y & p_z \\ -p_z & -p_x + i p_y & E + m & 0 \\ -p_x - i p_y & p_z & 0 & E + m \end{bmatrix}\begin{bmatrix}1 \\ 0 \\ \frac{p_z}{{\left\lvert{E}\right\rvert} + m} \\ \frac{p_x + i p_y}{{\left\lvert{E}\right\rvert} + m} \\ \end{bmatrix} \\ &\sim \begin{bmatrix}E^2 - m^2 - p_x^2 - p_y^2 - p_z^2 \\ -(p_x + i p_y) p_z + p_z (p_x + i p_y) \\ - p_z( E + m ) + p_z( E + m ) \\ -(p_x + i p_y) (E + m) + (E + m)(p_x + i p_y)\end{bmatrix} \\ &= 0.\end{aligned} Here the relationship between the free particle’s energy and momentum $E^2 - m^2 - \mathbf{p}^2 = 0$ has been used, so we have a zero as desired, and no non-relativistic approximations are required. We can show this generally too, without requiring the specifics of the z-axis spin up or down solutions. This is actually even easier. For the positive energy solutions 2.4 we have \begin{aligned}\begin{bmatrix}E - m & - \boldsymbol{\sigma} \cdot \mathbf{p} \\ - \boldsymbol{\sigma} \cdot \mathbf{p} & E + m\end{bmatrix}u&\sim\begin{bmatrix}E - m & - \boldsymbol{\sigma} \cdot \mathbf{p} \\ - \boldsymbol{\sigma} \cdot \mathbf{p} & E + m\end{bmatrix}\begin{bmatrix}(E + m) {\lvert {\pm} \rangle} \\ (\boldsymbol{\sigma} \cdot \mathbf{p}) {\lvert {\pm} \rangle}\end{bmatrix} \\ &=\begin{bmatrix}(E^2 - m^2 - (\boldsymbol{\sigma} \cdot \mathbf{p})^2) {\lvert {\pm} \rangle} \\ 0 {\lvert {\pm} \rangle}\end{bmatrix} \\ &=\begin{bmatrix}(E^2 - m^2 - \mathbf{p}^2) {\lvert {\pm} \rangle} \\ 0 {\lvert {\pm} \rangle}\end{bmatrix} \\ &=0,\end{aligned} where the identity $(\boldsymbol{\sigma} \cdot \mathbf{p})^2 = \mathbf{p}^2$ has been used. 
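Both of these substitution checks are easy to script. Here is a numpy sketch (my own addition; mass and momentum values are arbitrary), covering both signs of the energy, with $E \rightarrow -{\left\lvert{E}\right\rvert}$ in the block matrix for the negative energy states:

```python
import numpy as np

# Pauli matrices and sigma . p, as in (2.2) and (2.3)
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]

m = 1.3
p = np.array([0.7, -1.1, 0.4])   # arbitrary momentum
sp = sum(pi * si for pi, si in zip(p, sig))
E = np.sqrt(p @ p + m * m)       # |E|
I2 = np.eye(2)

# the block matrix of (2.1), with E = +|E| and with E = -|E|
D_pos = np.block([[(E - m) * I2, -sp], [-sp, (E + m) * I2]])
D_neg = np.block([[(-E - m) * I2, -sp], [-sp, (-E + m) * I2]])

for chi in (np.array([1, 0]), np.array([0, 1])):
    u = np.concatenate([(E + m) * chi, sp @ chi])     # proportional to (2.4)
    v = np.concatenate([-(sp @ chi), (E + m) * chi])  # proportional to (2.5)
    assert np.allclose(D_pos @ u, 0)   # positive energy solution
    assert np.allclose(D_neg @ v, 0)   # negative energy solution

# the identity used above: (sigma . p)^2 = p^2
assert np.allclose(sp @ sp, (p @ p) * np.eye(2))
```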
For the negative energy solutions 2.5 we have \begin{aligned}\begin{bmatrix}E - m & - \boldsymbol{\sigma} \cdot \mathbf{p} \\ - \boldsymbol{\sigma} \cdot \mathbf{p} & E + m\end{bmatrix}v&\sim\begin{bmatrix}E - m & - \boldsymbol{\sigma} \cdot \mathbf{p} \\ - \boldsymbol{\sigma} \cdot \mathbf{p} & E + m\end{bmatrix}\begin{bmatrix}-(\boldsymbol{\sigma} \cdot \mathbf{p}) {\lvert {\pm} \rangle} \\ (-E + m) {\lvert {\pm} \rangle} \\ \end{bmatrix} \\ &=\begin{bmatrix}0 {\lvert {\pm} \rangle} \\ (-E^2 + m^2 + (\boldsymbol{\sigma} \cdot \mathbf{p})^2) {\lvert {\pm} \rangle} \\ \end{bmatrix} \\ &=0.\end{aligned}

## Is there something special about the z-axis orientation?

Why was the z-axis spin orientation picked? It doesn’t seem to me that there would be any reason for this. For y-axis spin, recall that our eigenstates are \begin{aligned}{\lvert {\pm} \rangle}=\frac{1}{{\sqrt{2}}}\begin{bmatrix}1 \\ \pm i\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.13) Our positive energy states should therefore be \begin{aligned}u^{\pm}(\mathbf{p}) &\sim\begin{bmatrix}\begin{bmatrix}1 \\ \pm i\end{bmatrix} \\ \frac{\boldsymbol{\sigma} \cdot \mathbf{p}}{{\left\lvert{E}\right\rvert} + m} \begin{bmatrix}1 \\ \pm i\end{bmatrix} \end{bmatrix} \\ &=\begin{bmatrix}1 \\ \pm i \\ \frac{1}{{\left\lvert{E}\right\rvert} + m} \begin{bmatrix}p_z & p_x - i p_y \\ p_x + i p_y & - p_z\end{bmatrix}\begin{bmatrix}1 \\ \pm i\end{bmatrix} \end{bmatrix} \\ &\sim\begin{bmatrix}E + m \\ \pm i (E + m) \\ p_z \pm i p_x \pm p_y \\ p_x + i p_y \mp i p_z \end{bmatrix}\end{aligned} It is straightforward to verify that these are solutions. We find for example that \begin{aligned}\begin{bmatrix}E - m & - \boldsymbol{\sigma} \cdot \mathbf{p} \\ - \boldsymbol{\sigma} \cdot \mathbf{p} & E + m\end{bmatrix}u^{+}\sim \begin{bmatrix}E^2 - m^2 - \mathbf{p}^2 \\ i (E^2 - m^2 - \mathbf{p}^2 ) \\ 0 \\ 0\end{bmatrix}= 0,\end{aligned} \hspace{\stretch{1}}(3.14) as expected. What’s the general solution?
For \begin{aligned}\mathbf{n} = \begin{bmatrix}\sin\theta \cos\phi \\ \sin\theta \sin\phi \\ \cos\theta \end{bmatrix},\end{aligned} \hspace{\stretch{1}}(3.15) we find \begin{aligned}\boldsymbol{\sigma} \cdot \mathbf{n} =\begin{bmatrix}\cos\theta & \sin\theta e^{-i\phi} \\ \sin\theta e^{i\phi} & -\cos\theta\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(3.16) with eigenstates \begin{subequations} \begin{aligned}{\lvert {+} \rangle} &=\begin{bmatrix}\cos(\theta/2) e^{-i\phi/2} \\ \sin(\theta/2) e^{i\phi/2} \\ \end{bmatrix} \\ {\lvert {-} \rangle} &=\begin{bmatrix}-\sin(\theta/2) e^{-i\phi/2} \\ \cos(\theta/2) e^{i\phi/2} \\ \end{bmatrix} \end{aligned} \hspace{\stretch{1}}(3.17a) \end{subequations} Should we wish to consider an arbitrarily oriented spin, expressing $\mathbf{p}$ in spherical coordinates also makes sense \begin{aligned}\mathbf{p} = {\left\lvert{\mathbf{p}}\right\rvert}\begin{bmatrix}\sin\alpha \cos\beta \\ \sin\alpha \sin\beta \\ \cos\alpha \end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.18) and we find (with $S$ and $C$ for $\sin$ and $\cos$ respectively) \begin{subequations} \begin{aligned}\boldsymbol{\sigma} \cdot \mathbf{p} {\lvert {+} \rangle}&={\left\lvert{\mathbf{p}}\right\rvert}\begin{bmatrix} C_\alpha C_{\theta/2} e^{-i \phi} + S_\alpha S_{\theta/2} e^{-i \beta} \\ S_\alpha C_{\theta/2} e^{i (\beta - \phi)} - C_\alpha S_{\theta/2} \end{bmatrix} \\ \boldsymbol{\sigma} \cdot \mathbf{p} {\lvert {-} \rangle}&={\left\lvert{\mathbf{p}}\right\rvert}\begin{bmatrix}- C_\alpha S_{\theta/2} e^{-i \phi} + S_\alpha C_{\theta/2} e^{-i \beta} \\ - S_\alpha S_{\theta/2} e^{i (\beta - \phi)} - C_\alpha C_{\theta/2} \end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.19a) \end{subequations} Substitution back into 2.4, and 2.5 is then easy. Expressing these with the angles expressed as sums and differences is strongly suggested. 
With $\Delta = (\beta - \phi)/2$, and $\delta = (\beta + \phi)/2$ this gives \begin{subequations} \begin{aligned}\boldsymbol{\sigma} \cdot \mathbf{p} {\lvert {+} \rangle}&={\left\lvert{\mathbf{p}}\right\rvert}\begin{bmatrix}e^{-i\delta}\left(C_{\alpha - \theta/2} C_\Delta + i C_{\alpha + \theta/2} S_\Delta \right) \\ e^{i \Delta}\left(S_{\alpha - \theta/2} C_\Delta + i S_{\alpha + \theta/2} S_\Delta \right) \\ \end{bmatrix} \\ \boldsymbol{\sigma} \cdot \mathbf{p} {\lvert {-} \rangle}&={\left\lvert{\mathbf{p}}\right\rvert}\begin{bmatrix}e^{-i\delta}\left(S_{\alpha - \theta/2} C_\Delta - i S_{\alpha + \theta/2} S_\Delta \right) \\ e^{i \Delta}\left(-C_{\alpha - \theta/2} C_\Delta + i C_{\alpha + \theta/2} S_\Delta \right) \\ \end{bmatrix} \end{aligned} \hspace{\stretch{1}}(3.20a) \end{subequations} This is probably about as tidy as things can be made for the general case. ## Expanding the current equation. With \begin{aligned}\mathbf{j} = \psi^\dagger \boldsymbol{\alpha} \psi = \begin{bmatrix}u_1^\dagger & u_2^\dagger\end{bmatrix}\begin{bmatrix}0 & \boldsymbol{\sigma} \\ \boldsymbol{\sigma} & 0 \end{bmatrix}\begin{bmatrix}u_1 \\ u_2\end{bmatrix}= u_1^\dagger \boldsymbol{\sigma} u_2 + u_2^\dagger \boldsymbol{\sigma} u_1\end{aligned} \hspace{\stretch{1}}(3.21) We can expand the current for a general spin up or spin down state ${\lvert {r} \rangle}$ with respect to either the positive energy or negative energy solutions. 
Those (normalized) solutions are respectively \begin{aligned}\psi_{+} &=\sqrt{\frac{{\left\lvert{E}\right\rvert} + m}{2m}}\begin{bmatrix}{\lvert {r} \rangle} \\ \frac{\boldsymbol{\sigma} \cdot \mathbf{p}}{{\left\lvert{E}\right\rvert}+ m} {\lvert {r} \rangle} \\ \end{bmatrix} \\ \psi_{-} &=\sqrt{\frac{{\left\lvert{E}\right\rvert} + m}{2m}}\begin{bmatrix}-\frac{\boldsymbol{\sigma} \cdot \mathbf{p}}{{\left\lvert{E}\right\rvert}+ m} {\lvert {r} \rangle} \\ {\lvert {r} \rangle} \\ \end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.22) For the $i$th component of the positive energy solution current we have \begin{aligned}\psi_{+}^\dagger \alpha_i \psi_{+}&=\frac{{\left\lvert{E}\right\rvert} + m}{2m}\begin{bmatrix}{\langle {r} \rvert} & {\langle {r} \rvert} \frac{\boldsymbol{\sigma} \cdot \mathbf{p}}{{\left\lvert{E}\right\rvert} + m}\end{bmatrix}\begin{bmatrix}\sigma_i \frac{\boldsymbol{\sigma} \cdot \mathbf{p}}{{\left\lvert{E}\right\rvert} + m} {\lvert {r} \rangle} \\ \sigma_i {\lvert {r} \rangle}\end{bmatrix} \\ &=\frac{1}{2m}{\langle {r} \rvert} \left(\sigma_i (\boldsymbol{\sigma} \cdot \mathbf{p})+(\boldsymbol{\sigma} \cdot \mathbf{p}) \sigma_i \right) {\lvert {r} \rangle}\end{aligned} Similarly for a negative energy solution we have \begin{aligned}\psi_{-}^\dagger \alpha_i \psi_{-}&=\frac{{\left\lvert{E}\right\rvert} + m}{2m}\begin{bmatrix}-{\langle {r} \rvert} \frac{\boldsymbol{\sigma} \cdot \mathbf{p}}{{\left\lvert{E}\right\rvert} + m} & {\langle {r} \rvert}\end{bmatrix}\begin{bmatrix}\sigma_i {\lvert {r} \rangle} \\ -\sigma_i \frac{\boldsymbol{\sigma} \cdot \mathbf{p}}{{\left\lvert{E}\right\rvert} + m} {\lvert {r} \rangle}\end{bmatrix} \\ &=\frac{1}{2m}{\langle {r} \rvert} \left(-\sigma_i (\boldsymbol{\sigma} \cdot \mathbf{p})-(\boldsymbol{\sigma} \cdot \mathbf{p}) \sigma_i \right) {\lvert {r} \rangle}\end{aligned} We can expand the inner term of both easily \begin{aligned}\sigma_i (\boldsymbol{\sigma} \cdot \mathbf{p}) + (\boldsymbol{\sigma} \cdot \mathbf{p}) \sigma_i =2
\sigma_i^2 p_i + \sum_{i \ne j} ({\sigma_i \sigma_j + \sigma_j \sigma_i}) p^j\end{aligned} \hspace{\stretch{1}}(3.24) so that we have for the positive and negative energy solutions currents of \begin{aligned}j_i &= {\langle {r} \rvert} \frac{p_i}{m} {\lvert {r} \rangle} \\ j_i &= -{\langle {r} \rvert} \frac{p_i}{m} {\lvert {r} \rangle}.\end{aligned} \hspace{\stretch{1}}(3.25) This finds the velocity dependence noted in section 33.4, but does not require taking any specific spin orientation, nor any specific momentum direction. ## Unpacking the covariant equation. Pre-multiplication of the covariant Dirac equation by $\gamma^4$ should provide a space-time split of the Dirac equation. Let’s verify this \begin{aligned}\gamma^4 (\gamma \cdot p - m)&=\gamma^4 (\gamma_\mu p^\mu - m) \\ &=E\begin{bmatrix}1 & 0 \\ 0 & 1 \end{bmatrix} + \gamma^4 \gamma_a p^a - m \begin{bmatrix}1 & 0 \\ 0 & -1 \end{bmatrix},\end{aligned} but \begin{aligned}\gamma^4 \gamma_a =\begin{bmatrix}1 & 0 \\ 0 & -1 \end{bmatrix}\begin{bmatrix}0 & -\sigma_a \\ \sigma_a & 0 \end{bmatrix}=\begin{bmatrix}0 & -\sigma_a \\ -\sigma_a & 0 \end{bmatrix}\end{aligned} \begin{aligned}\gamma^4 (\gamma \cdot p - m)=E\begin{bmatrix}1 & 0 \\ 0 & 1 \end{bmatrix} - \begin{bmatrix}0 & \sigma_a \\ \sigma_a & 0 \end{bmatrix}p^a - m \begin{bmatrix}1 & 0 \\ 0 & -1 \end{bmatrix} =\begin{bmatrix}E - m & - \sigma \cdot \mathbf{p} \\ - \sigma \cdot \mathbf{p} & E + m\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.27) This recovers 2.1 as expected. ## Two by two form for the covariant equations. 
If we put the covariant Dirac equations in two by two matrix form we get \begin{aligned}0&= (\gamma \cdot p - m ) u \\ &= \left({\left\lvert{E}\right\rvert} \begin{bmatrix}1 & 0 \\ 0 & -1 \end{bmatrix}+ \begin{bmatrix}0 & - \sigma_a \\ \sigma_a & 0\end{bmatrix}p^a- m\begin{bmatrix}1 & 0 \\ 0 & 1 \end{bmatrix}\right) u \\ &=\begin{bmatrix}{\left\lvert{E}\right\rvert} - m & - \boldsymbol{\sigma} \cdot \mathbf{p} \\ \boldsymbol{\sigma} \cdot \mathbf{p} & -{\left\lvert{E}\right\rvert} - m\end{bmatrix} u\end{aligned} and \begin{aligned}0 &= (\gamma \cdot p + m ) v \\ &= \left({\left\lvert{E}\right\rvert} \begin{bmatrix}1 & 0 \\ 0 & -1 \end{bmatrix}+ \begin{bmatrix}0 & - \sigma_a \\ \sigma_a & 0\end{bmatrix}p^a+ m\begin{bmatrix}1 & 0 \\ 0 & 1 \end{bmatrix}\right) v \\ &=\begin{bmatrix}{\left\lvert{E}\right\rvert} + m & - \boldsymbol{\sigma} \cdot \mathbf{p} \\ \boldsymbol{\sigma} \cdot \mathbf{p} & -{\left\lvert{E}\right\rvert} + m\end{bmatrix} v\end{aligned} This form makes it easy to verify that our solutions are \begin{aligned}u_r(\mathbf{p}) =\sqrt{\frac{{\left\lvert{E}\right\rvert} + m}{2m}}\begin{bmatrix}{\lvert {r} \rangle} \\ \frac{\boldsymbol{\sigma} \cdot \mathbf{p}}{{\left\lvert{E}\right\rvert} + m} {\lvert {r} \rangle}\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(3.28) and \begin{aligned}v_r(\mathbf{p}) =\sqrt{\frac{{\left\lvert{E}\right\rvert} + m}{2m}}\begin{bmatrix}\frac{\boldsymbol{\sigma} \cdot \mathbf{p}}{{\left\lvert{E}\right\rvert} + m} {\lvert {r} \rangle} \\ {\lvert {r} \rangle} \\ \end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.29) It’s curious to consider these part of a basis for a single equation. 
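As a numeric cross-check of 3.28 and 3.29, here is a numpy sketch (my own addition; the mass and momentum are arbitrary) building $\gamma^4$, $\gamma_a$ and $\gamma \cdot p$ in this block form, and verifying that both covariant equations are satisfied, along with $(\gamma \cdot p)^2 = m^2 \mathbf{1}$:

```python
import numpy as np

sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]
Z = np.zeros((2, 2), dtype=complex)
I2 = np.eye(2, dtype=complex)

# gamma^4 and gamma_a in the block form used above
g4 = np.block([[I2, Z], [Z, -I2]])
gam = [np.block([[Z, -s], [s, Z]]) for s in sig]

m = 0.7
p = np.array([0.2, 0.9, -0.4])
E = np.sqrt(p @ p + m * m)       # |E|
sp = sum(pi * si for pi, si in zip(p, sig))

# gamma . p = |E| gamma^4 + p^a gamma_a
slash_p = E * g4 + sum(pa * ga for pa, ga in zip(p, gam))

for chi in (np.array([1, 0]), np.array([0, 1])):
    u = np.concatenate([(E + m) * chi, sp @ chi])   # proportional to (3.28)
    v = np.concatenate([sp @ chi, (E + m) * chi])   # proportional to (3.29)
    assert np.allclose((slash_p - m * np.eye(4)) @ u, 0)
    assert np.allclose((slash_p + m * np.eye(4)) @ v, 0)

# (gamma . p)^2 = m^2, a Klein-Gordon-like relation
assert np.allclose(slash_p @ slash_p, m * m * np.eye(4))
```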
I suppose that all together they are actually eigenstates of the equation \begin{aligned}(\gamma \cdot p + m) (\gamma \cdot p - m) u = ((\gamma \cdot p)^2 - m^2) u = 0,\end{aligned} \hspace{\stretch{1}}(3.30) or \begin{aligned}(\gamma \cdot p - m) (\gamma \cdot p + m) v = ((\gamma \cdot p)^2 - m^2) v = 0,\end{aligned} \hspace{\stretch{1}}(3.31) which have the form of the Klein-Gordon equation.

## Orthonormality.

Orthonormality for the $u$ vectors is easy to show, and we can do so without requiring any specific spin orientation \begin{aligned}\bar{u}_r u_s &= \frac{{\left\lvert{E}\right\rvert} + m}{2m}\begin{bmatrix}{\langle {r} \rvert} & {\langle {r} \rvert} \frac{\boldsymbol{\sigma} \cdot \mathbf{p}}{{\left\lvert{E}\right\rvert} + m} \end{bmatrix}\gamma^4\begin{bmatrix}{\lvert {s} \rangle} \\ \frac{\boldsymbol{\sigma} \cdot \mathbf{p}}{{\left\lvert{E}\right\rvert} + m} {\lvert {s} \rangle}\end{bmatrix} \\ &=\frac{{\left\lvert{E}\right\rvert} + m}{2m}\begin{bmatrix}{\langle {r} \rvert} &-{\langle {r} \rvert} \frac{\boldsymbol{\sigma} \cdot \mathbf{p}}{{\left\lvert{E}\right\rvert} + m} \end{bmatrix}\begin{bmatrix}{\lvert {s} \rangle} \\ \frac{\boldsymbol{\sigma} \cdot \mathbf{p}}{{\left\lvert{E}\right\rvert} + m} {\lvert {s} \rangle}\end{bmatrix} \\ &=\frac{1}{2m({\left\lvert{E}\right\rvert} + m)}\left\langle{{r}} \vert {{s}}\right\rangle \left( E^2 + m^2 + 2 {\left\lvert{E}\right\rvert} m - \mathbf{p}^2 \right) \\ &=\left\langle{{r}} \vert {{s}}\right\rangle.\end{aligned} It’s also easy for $v$ vectors \begin{aligned}\bar{v}_r v_s &= \frac{{\left\lvert{E}\right\rvert} + m}{2m}\begin{bmatrix}{\langle {r} \rvert} \frac{\boldsymbol{\sigma} \cdot \mathbf{p}}{{\left\lvert{E}\right\rvert} + m} & {\langle {r} \rvert} \end{bmatrix}\gamma^4\begin{bmatrix}\frac{\boldsymbol{\sigma} \cdot \mathbf{p}}{{\left\lvert{E}\right\rvert} + m} {\lvert {s} \rangle} \\ {\lvert {s} \rangle} \end{bmatrix} \\ &=\frac{{\left\lvert{E}\right\rvert} + m}{2m}\begin{bmatrix}{\langle {r} \rvert} \frac{\boldsymbol{\sigma} \cdot \mathbf{p}}{{\left\lvert{E}\right\rvert} + m} &-{\langle {r} \rvert} \end{bmatrix}\begin{bmatrix}\frac{\boldsymbol{\sigma} \cdot \mathbf{p}}{{\left\lvert{E}\right\rvert} + m} {\lvert {s} \rangle} \\ {\lvert {s} \rangle} \end{bmatrix} \\ &=-\frac{1}{2m({\left\lvert{E}\right\rvert} + m)}\left\langle{{r}} \vert {{s}}\right\rangle \left( E^2 + m^2 + 2 {\left\lvert{E}\right\rvert} m - \mathbf{p}^2 \right) \\ &=-\left\langle{{r}} \vert {{s}}\right\rangle.\end{aligned} For the cross terms we have \begin{aligned}\bar{u}_r v_s &= \frac{{\left\lvert{E}\right\rvert} + m}{2m}\begin{bmatrix}{\langle {r} \rvert} & {\langle {r} \rvert} \frac{\boldsymbol{\sigma} \cdot \mathbf{p}}{{\left\lvert{E}\right\rvert} + m} \end{bmatrix}\gamma^4\begin{bmatrix}\frac{\boldsymbol{\sigma} \cdot \mathbf{p}}{{\left\lvert{E}\right\rvert} + m} {\lvert {s} \rangle} \\ {\lvert {s} \rangle} \end{bmatrix} \\ &= \frac{{\left\lvert{E}\right\rvert} + m}{2m}\begin{bmatrix}{\langle {r} \rvert} &-{\langle {r} \rvert} \frac{\boldsymbol{\sigma} \cdot \mathbf{p}}{{\left\lvert{E}\right\rvert} + m} \end{bmatrix}\begin{bmatrix}\frac{\boldsymbol{\sigma} \cdot \mathbf{p}}{{\left\lvert{E}\right\rvert} + m} {\lvert {s} \rangle} \\ {\lvert {s} \rangle} \end{bmatrix} \\ &=\frac{1}{2m}{\langle {r} \rvert} ( \boldsymbol{\sigma} \cdot \mathbf{p} - \boldsymbol{\sigma} \cdot \mathbf{p} ) {\lvert {s} \rangle} \\ &= 0.\end{aligned}

## Resolution of identity.

It’s claimed that an identity representation is \begin{aligned}\mathbf{1} = \sum_r u_r \bar{u}_r - v_r \bar{v}_r.\end{aligned} \hspace{\stretch{1}}(3.32) This makes some sense, but we can see systematically why we have this negative sign. Suppose that we have a basis ${\lvert {a_i} \rangle}$ for which we have $\left\langle{{a_i}} \vert {{a_j}}\right\rangle = \pm \delta_{ij}$ (rather than the strict orthonormality condition $\left\langle{{a_i}} \vert {{a_j}}\right\rangle = \delta_{ij}$).
Consider the calculation of the Fourier coefficients of a state \begin{aligned}{\lvert {a} \rangle} = \alpha_i {\lvert {a_i} \rangle}.\end{aligned} \hspace{\stretch{1}}(3.33) We have \begin{aligned}\left\langle{{a_j}} \vert {{a}}\right\rangle = \alpha_i \left\langle{{a_j}} \vert {{a_i}}\right\rangle.\end{aligned} \hspace{\stretch{1}}(3.34) For $i \ne j$ $\left\langle{{a_i}} \vert {{a_j}}\right\rangle = 0$, so that the coefficient is \begin{aligned}\alpha_j =\frac{\left\langle{{a_j}} \vert {{a}}\right\rangle}{\left\langle{{a_j}} \vert {{a_j}}\right\rangle}.\end{aligned} \hspace{\stretch{1}}(3.35) The coordinate representation of this state vector with respect to this basis is thus \begin{aligned}{\lvert {a} \rangle} = \sum_i \left( \frac{\left\langle{{a_i}} \vert {{a}}\right\rangle}{\left\langle{{a_i}} \vert {{a_i}}\right\rangle} \right){\lvert {a_i} \rangle}.\end{aligned} \hspace{\stretch{1}}(3.36) Shuffling things around, employing the somewhat abusive seeming Dirac ket-bra operator notation, we find the general identity operation takes the form \begin{aligned}{\lvert {a} \rangle} = \left( \frac{{\lvert {a_i} \rangle} {\langle {a_i} \rvert} }{\left\langle{{a_i}} \vert {{a_i}}\right\rangle} \right) {\lvert {a} \rangle},\end{aligned} \hspace{\stretch{1}}(3.37) so that the identity itself has the form \begin{aligned}\mathbf{1} = \frac{{\lvert {a_i} \rangle} {\langle {a_i} \rvert} }{\left\langle{{a_i}} \vert {{a_i}}\right\rangle}.\end{aligned} \hspace{\stretch{1}}(3.38) This is the sum of all the ket-bras for which the braket is one, minus the sum of all the ket-bras for which the braket is negative, showing that the form of the claimed identity is justified. 
We can also verify this directly by computation, and find \begin{aligned}\sum_r u_r \bar{u}_r &=\frac{{\left\lvert{E}\right\rvert} + m}{2m}\sum_r \begin{bmatrix}{\lvert {r} \rangle} \\ \frac{\boldsymbol{\sigma} \cdot \mathbf{p} {\lvert {r} \rangle}}{{\left\lvert{E}\right\rvert} + m}\end{bmatrix}\begin{bmatrix}{\langle {r} \rvert} &-\frac{{\langle {r} \rvert} \boldsymbol{\sigma} \cdot \mathbf{p}}{{\left\lvert{E}\right\rvert} + m}\end{bmatrix} \\ &=\frac{{\left\lvert{E}\right\rvert} + m}{2m}\sum_r \begin{bmatrix}{\lvert {r} \rangle}{\langle {r} \rvert} & -{\lvert {r} \rangle}{\langle {r} \rvert} \frac{\boldsymbol{\sigma} \cdot \mathbf{p}}{{\left\lvert{E}\right\rvert} + m} \\ \frac{\boldsymbol{\sigma} \cdot \mathbf{p}}{{\left\lvert{E}\right\rvert} + m} {\lvert {r} \rangle}{\langle {r} \rvert} &-\frac{\boldsymbol{\sigma} \cdot \mathbf{p}}{{\left\lvert{E}\right\rvert} + m} {\lvert {r} \rangle}{\langle {r} \rvert} \frac{\boldsymbol{\sigma} \cdot \mathbf{p}}{{\left\lvert{E}\right\rvert} + m} \\ \end{bmatrix}\end{aligned} We can pull the summation into the matrices and note that $\sum_r {\lvert {r} \rangle}{\langle {r} \rvert} = \mathbf{1}$ (the two by two identity), so that we are left with \begin{aligned}\sum_r u_r \bar{u}_r =\frac{1}{{2m}}\begin{bmatrix}{\left\lvert{E}\right\rvert} + m & -\boldsymbol{\sigma} \cdot \mathbf{p} \\ \boldsymbol{\sigma} \cdot \mathbf{p} &-\frac{\mathbf{p}^2}{{\left\lvert{E}\right\rvert} + m} \end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.39) Similarly, we find \begin{aligned}-\sum_r v_r \bar{v}_r =\frac{1}{{2m}}\begin{bmatrix}-\frac{\mathbf{p}^2}{{\left\lvert{E}\right\rvert} + m} & \boldsymbol{\sigma} \cdot \mathbf{p} \\ -\boldsymbol{\sigma} \cdot \mathbf{p} & {\left\lvert{E}\right\rvert} + m \end{bmatrix},\end{aligned} \hspace{\stretch{1}}(3.40) summing the two (noting that $E^2 - \mathbf{p}^2 - m^2 = 0$) we get the block identity matrix as desired. We’ve also just calculated the projection operators.
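The same resolution of identity can be spot checked numerically; a numpy sketch (my own addition, arbitrary momentum) building the normalized $u_r$, $v_r$ states and summing the outer products:

```python
import numpy as np

sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]

m = 1.0
p = np.array([0.6, -0.1, 0.8])
E = np.sqrt(p @ p + m * m)       # |E|
N = np.sqrt((E + m) / (2 * m))
sp = sum(pi * si for pi, si in zip(p, sig))
g4 = np.diag([1, 1, -1, -1]).astype(complex)
chis = [np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)]

# normalized u_r, v_r of (3.28) and (3.29)
us = [N * np.concatenate([c, sp @ c / (E + m)]) for c in chis]
vs = [N * np.concatenate([sp @ c / (E + m), c]) for c in chis]

# sum_r u_r ubar_r - v_r vbar_r, with bar(psi) = psi-dagger gamma^4
ident = sum(np.outer(u, u.conj() @ g4) for u in us) \
      - sum(np.outer(v, v.conj() @ g4) for v in vs)

assert np.allclose(ident, np.eye(4))
```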
Let’s verify that expanding the covariant form in the text produces the same result \begin{aligned}\frac{1}{{2m}}(m \pm \gamma \cdot p) &=\frac{1}{{2m}}(m \pm \gamma^4 {\left\lvert{E}\right\rvert} \pm \gamma_a p^a ) \\ &=\frac{1}{{2m}}\left(m\begin{bmatrix}1 & 0 \\ 0 & 1\end{bmatrix}\pm {\left\lvert{E}\right\rvert}\begin{bmatrix}1 & 0 \\ 0 & -1\end{bmatrix}\pm p^a\begin{bmatrix}0 & -\sigma_a \\ \sigma_a & 0 \end{bmatrix}\right) \\ &=\frac{1}{{2m}}\begin{bmatrix}m \pm {\left\lvert{E}\right\rvert} & \mp \boldsymbol{\sigma} \cdot \mathbf{p} \\ \pm \boldsymbol{\sigma} \cdot \mathbf{p} & m \mp {\left\lvert{E}\right\rvert} \end{bmatrix}\end{aligned} Now compare to 3.39, and 3.40, which we rewrite using $-\mathbf{p}^2/(m + {\left\lvert{E}\right\rvert}) = m - {\left\lvert{E}\right\rvert}$ as \begin{aligned}\sum_r u_r \bar{u}_r &=\frac{1}{{2m}}\begin{bmatrix}{\left\lvert{E}\right\rvert} + m & -\boldsymbol{\sigma} \cdot \mathbf{p} \\ \boldsymbol{\sigma} \cdot \mathbf{p} &m - {\left\lvert{E}\right\rvert}\end{bmatrix} \\ -\sum_r v_r \bar{v}_r &=\frac{1}{{2m}}\begin{bmatrix}m - {\left\lvert{E}\right\rvert} & \boldsymbol{\sigma} \cdot \mathbf{p} \\ -\boldsymbol{\sigma} \cdot \mathbf{p} & {\left\lvert{E}\right\rvert} + m \end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.41)

## Lorentz transformation of Dirac equation.

Equation (35.107) in the text is missing the positional notation to show the placement of the indexes, and should be \begin{aligned}\left[{\Sigma},{\gamma^\nu}\right] = e_\mu^{.\nu} \gamma^\mu,\end{aligned} \hspace{\stretch{1}}(3.45) where the solution is \begin{aligned}\Sigma = \frac{1}{{4}} \gamma^\alpha \gamma^\beta e_{\alpha \beta}\end{aligned} \hspace{\stretch{1}}(3.45) This does have the form I’d expect, a bivector, but we can show explicitly that this is the solution without too much trouble.
Consider the commutator \begin{aligned}\left[{ \gamma^\alpha \gamma^\beta e_{\alpha \beta} },{\gamma^\nu}\right]&=e_{\alpha \beta} \left[{ \gamma^\alpha \gamma^\beta },{\gamma^\nu}\right] \\ &=e_{\alpha \beta} \left( \gamma^\alpha \gamma^\beta \gamma^\nu-\gamma^\nu \gamma^\alpha \gamma^\beta \right) \\ &=e_{\alpha \beta} \left( \left( {\gamma^\alpha \cdot \gamma^\beta }+\gamma^\alpha \wedge \gamma^\beta \right) \gamma^\nu-\gamma^\nu \left( {\gamma^\alpha \cdot \gamma^\beta }+\gamma^\alpha \wedge \gamma^\beta \right) \right) \\ &=e_{\alpha \beta} \left(\left(\gamma^\alpha \wedge \gamma^\beta \right)\gamma^\nu-\gamma^\nu \left(\gamma^\alpha \wedge \gamma^\beta \right)\right)\\ &=e_{\alpha \beta} \left(\left(\gamma^\alpha \wedge \gamma^\beta \right) \wedge \gamma^\nu-\gamma^\nu \wedge \left(\gamma^\alpha \wedge \gamma^\beta \right)\right)+e_{\alpha \beta} \left(\left(\gamma^\alpha \wedge \gamma^\beta \right) \cdot \gamma^\nu-\gamma^\nu \cdot \left(\gamma^\alpha \wedge \gamma^\beta \right)\right)\\ &=2 e_{\alpha \beta} \left(\gamma^\alpha \wedge \gamma^\beta \right) \cdot \gamma^\nu\\ &=2 e^{\alpha \beta} \left(\gamma_\alpha \wedge \gamma_\beta \right) \cdot \gamma^\nu\\ &=2 e^{\alpha \beta} \left(\gamma_\alpha \delta_\beta^{.\nu}-\gamma_\beta \delta_\alpha^{.\nu}\right)\\ &=4 e^{\alpha \nu} \gamma_\alpha \\ &=4 e_\alpha^{.\nu} \gamma^\alpha \\ \end{aligned} Would this be any easier to prove without utilizing the dot and wedge product identities? I used a few of them, starting with \begin{aligned}a \cdot b &= \frac{1}{{2}} (a b + b a) = \frac{1}{{2}} \left\{{a},{b}\right\} \\ a \wedge b &= \frac{1}{{2}} (a b - b a) = \frac{1}{{2}} \left[{a},{b}\right] \\ a b &= a \cdot b + a \wedge b = \frac{1}{{2}} ( \left\{{a},{b}\right\} + \left[{a},{b}\right] )\end{aligned} \hspace{\stretch{1}}(3.45) In matrix notation we would have to show that the anticommutator $\left\{{\gamma^\alpha},{\gamma^\beta}\right\}$ commutes with any $\gamma^\nu$ to make the first cancellation. 
We can do so by noting \begin{aligned}\left[{\gamma^\alpha \gamma^\beta + \gamma^\beta \gamma^\alpha},{\gamma^\nu}\right] &= \left[{ 2 g^{\alpha \beta} \mathbf{1}},{\gamma^\nu}\right] \\ &= 2 g^{\alpha \beta} \left[{\mathbf{1}},{\gamma^\nu}\right] \\ &= 0\end{aligned} That’s enough to get us on the path to how to prove this in matrix form \begin{aligned}\left[{ \gamma^\alpha \gamma^\beta e_{\alpha \beta} },{\gamma^\nu}\right]&=e_{\alpha \beta} \left[{ \gamma^\alpha \gamma^\beta },{\gamma^\nu}\right] \\ &=e_{\alpha \beta} \left( \gamma^\alpha \gamma^\beta \gamma^\nu-\gamma^\nu \gamma^\alpha \gamma^\beta \right) \\ &=\frac{1}{{2}} e_{\alpha \beta} \left( \left( \left\{{\gamma^\alpha},{\gamma^\beta}\right\}+\left[{\gamma^\alpha },{ \gamma^\beta }\right]\right) \gamma^\nu-\gamma^\nu \left( \left\{{\gamma^\alpha },{\gamma^\beta }\right\}+\left[{\gamma^\alpha },{\gamma^\beta }\right]\right) \right) \\ &=\frac{1}{{2}} e_{\alpha \beta} \left( \left[{\gamma^\alpha },{ \gamma^\beta }\right] \gamma^\nu-\gamma^\nu \left[{\gamma^\alpha },{ \gamma^\beta }\right] \right) \\ &=\frac{1}{{2}} e_{\alpha \beta} \left[{\left[{\gamma^\alpha },{ \gamma^\beta }\right] },{\gamma^\nu}\right] \\ &=\frac{1}{{2}} e_{\alpha \beta} \left[{\gamma^\alpha \gamma^\beta -\gamma^\beta \gamma^\alpha },{\gamma^\nu}\right] \\ &=e_{\alpha \beta} \left[{\gamma^\alpha \gamma^\beta },{\gamma^\nu}\right] \\ &=e_{\alpha \beta} \left(\gamma^\alpha \gamma^\beta \gamma^\nu-\gamma^\nu \gamma^\alpha \gamma^\beta \right) \\ &=e_{\alpha \beta} \left(\gamma^\alpha ( 2 g^{\beta \nu} - \gamma^\nu \gamma^\beta )-\gamma^\nu \gamma^\alpha \gamma^\beta \right) \\ &=e_{\alpha \beta} \left(2 \gamma^\alpha g^{\beta \nu} - \gamma^\alpha \gamma^\nu \gamma^\beta -\gamma^\nu \gamma^\alpha \gamma^\beta \right) \\ &=2 e_{\alpha \beta} \left(\gamma^\alpha g^{\beta \nu} - g^{\alpha \nu} \gamma^\beta \right) \\ &=2 e_{\alpha
\beta} \gamma^\alpha g^{\beta \nu} + 2 e_{\beta \alpha} g^{\alpha \nu} \gamma^\beta \\ &=2 e_{\alpha}^{. \nu} \gamma^\alpha + 2 e_{\beta}^{.\nu} \gamma^\beta \\ &=4 e_{\alpha}^{. \nu} \gamma^\alpha \end{aligned}

# References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.

Posted in Math and Physics Learning. | Tagged: , , , | Leave a Comment »

## some cpu info collection methods for HPUX.

Posted by peeterjoot on June 7, 2011

This link shows lots of ways to collect cpu info on hpux, but some of them appear to require root. Here’s a couple that work for me as a non-root user:

$ grep processor /var/adm/syslog/syslog.log
Apr 17 00:44:17 mesh vmunix: 120 processor
Apr 17 00:44:17 mesh vmunix: 121 processor
Apr 17 00:44:17 mesh vmunix: 122 processor
Apr 17 00:44:17 mesh vmunix: 123 processor

$ ioscan -fnkC processor
Class       I  H/W Path  Driver     S/W State  H/W Type   Description
===================================================================
processor   0  120       processor  CLAIMED    PROCESSOR  Processor
processor   1  121       processor  CLAIMED    PROCESSOR  Processor
processor   2  122       processor  CLAIMED    PROCESSOR  Processor
processor   3  123       processor  CLAIMED    PROCESSOR  Processor

This one appears to be the best way, since we get the CPU type too (1005 is what I believe to be a really old cpu version):

$ echo "map selall info;wait infolog" | /usr/sbin/cstm | grep -i cpu
28  120                  CPU (1005)
29  121                  CPU (1005)
30  122                  CPU (1005)
31  123                  CPU (1005)


Posted in Development environment | Tagged: | 1 Comment »