# Peeter Joot's (OLD) Blog.


## Poincare transformations

Posted by peeterjoot on July 6, 2009

# Motivation

In ([1]) a Poincare transformation is used to develop the symmetric stress energy tensor directly, in contrast to the non-symmetric canonical stress energy tensor that results from spacetime translation.

Here I attempt to decode one part of this article: the use of a Poincare transformation.

# Incremental transformation in GA form.

Equation (11) in the article is labeled an infinitesimal Poincare transformation

\begin{aligned}{x'}^\mu&=x^\mu+ {{\epsilon}^\mu}_\nu x^\nu+ {\epsilon}^\mu \end{aligned} \quad\quad\quad(1)

It is stated that the antisymmetrization condition $\epsilon_{\mu\nu} = -\epsilon_{\nu\mu}$ holds. This is somewhat confusing, since the infinitesimal transformation is given by a mixed upper and lower index tensor. Due to the antisymmetry, perhaps this is all a coordinate statement of the following vector to vector linear transformation

\begin{aligned}x' = x + \epsilon + A \cdot x \end{aligned} \quad\quad\quad(2)

This transformation is less restricted than a plain old spacetime translation, as it also contains a projective term, where $x$ is projected onto the spacetime (or spatial) plane $A$ (a bivector), plus a rotation in that plane.

Writing as usual

\begin{aligned}x = \gamma_\mu x^\mu \end{aligned}

So that components are recovered by taking dot products, as in
\begin{aligned}x^\mu = x \cdot \gamma^\mu \end{aligned}

For the bivector term, write

\begin{aligned}A = c \wedge d = c^\alpha d^\beta (\gamma_\alpha \wedge \gamma_\beta) \end{aligned}

For
\begin{aligned}(A \cdot x ) \cdot \gamma^\mu &=c^\alpha d^\beta x_\sigma ((\gamma_\alpha \wedge \gamma_\beta) \cdot \gamma^\sigma) \cdot \gamma^\mu \\ &=c^\alpha d^\beta x_\sigma ( {\delta_\alpha}^\mu {\delta_\beta}^\sigma -{\delta_\beta}^\mu {\delta_\alpha}^\sigma ) \\ &=(c^\mu d^\sigma -c^\sigma d^\mu ) x_\sigma \end{aligned}

This allows for an identification $\epsilon^{\mu\sigma} = c^\mu d^\sigma -c^\sigma d^\mu$ which is antisymmetric as required. With that identification we can write (1) via the equivalent vector relation (2) if we write

\begin{aligned}{\epsilon^\mu}_\sigma x^\sigma = (c^\mu d_\sigma -c_\sigma d^\mu ) x^\sigma \end{aligned}

Where ${\epsilon^\mu}_\sigma$ is defined implicitly in terms of components of the bivector $A = c \wedge d$.
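Since this sort of index identification is easy to get wrong, here is a quick numerical sanity check (my addition, not part of the original argument), using numpy with the metric $\eta = \mathrm{diag}(1,-1,-1,-1)$ and arbitrary vectors $c, d, x$:

```python
import numpy as np

rng = np.random.default_rng(0)
eta = np.diag([1.0, -1.0, -1.0, -1.0])   # metric, signature (+,-,-,-)

c, d, x = rng.standard_normal((3, 4))    # arbitrary vectors (upper components)

# epsilon^{mu sigma} = c^mu d^sigma - c^sigma d^mu
eps = np.outer(c, d) - np.outer(d, c)
assert np.allclose(eps, -eps.T)          # antisymmetric as required

# (A . x)^mu computed directly from A = c ^ d:
#   (c wedge d) . x = c (d . x) - d (c . x)
Adotx = c * (d @ eta @ x) - d * (c @ eta @ x)

# ... and via epsilon^{mu}_{sigma} x^sigma = eps^{mu sigma} eta_{sigma nu} x^nu
assert np.allclose(Adotx, eps @ eta @ x)
```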

Is this what a Poincare transformation is? The Poincare Transformation article suggests not: there the Poincare transformation is a spacetime translation plus a Lorentz transformation (a composition of boosts and rotations). That Lorentz transformation will not be antisymmetric however, so how can these be reconciled? The key is probably the fact that this was an infinitesimal Poincare transformation, so let's consider a Taylor expansion of the Lorentz boost or rotation rotor, considering instead a transformation of the following form

\begin{aligned}x' &= x + \epsilon + R x \tilde{R} \\ R \tilde{R} &= 1 \end{aligned} \quad\quad\quad(3)

In particular, let’s look at the Lorentz transformation in terms of the exponential form
\begin{aligned}R = e^{I \theta/2} \end{aligned}

Here $\theta$ is either the angle of rotation (when the bivector is a unit spatial plane such as $I = \gamma_k \wedge \gamma_m$), or a rapidity angle (when the bivector is a unit spacetime plane such as $I = \gamma_k \wedge \gamma_0$).

Ignoring the translation in (3) for now, to calculate the first order term in Taylor series we need

\begin{aligned}\frac{dx'}{d\theta} &= \frac{dR}{d\theta} x \tilde{R} +{R} x \frac{d\tilde{R}}{d\theta} \\ &= \frac{dR}{d\theta} \tilde{R} R x \tilde{R} +{R} x \tilde{R} R \frac{d\tilde{R}}{d\theta} \\ &=\frac{1}{2} ( \Omega x' + x' \tilde{\Omega} ) \end{aligned}

where
\begin{aligned}\frac{1}{2}\Omega = \frac{dR}{d\theta} \tilde{R} \end{aligned}

Now, what is the grade of the product $\Omega$? We have both $dR/d\theta$ and $R$ in $\{\bigwedge^0 \oplus \bigwedge^2\}$, so the product can only have even grades, $\Omega \in \{\bigwedge^0 \oplus \bigwedge^2 \oplus \bigwedge^4\}$, but the constraint $R \tilde{R} = 1$ restricts this.

Since $R \tilde{R} = 1$ the derivative of this is zero

\begin{aligned}\frac{dR}{d\theta} \tilde{R} + {R} \frac{d\tilde{R}}{d\theta} = 0 \end{aligned}

Or
\begin{aligned}\frac{dR}{d\theta} \tilde{R} = - \left( \frac{dR}{d\theta} \tilde{R} \right)^{\tilde{}} \end{aligned}

Antisymmetry rules out grade zero and four terms, leaving only the possibility of grade 2. That leaves

\begin{aligned}\frac{dx'}{d\theta} = \frac{1}{2}(\Omega x' - x' \Omega) = \Omega \cdot x' \end{aligned}

And the first order Taylor expansion around $\theta =0$ is

\begin{aligned}x'(d\theta) &\approx x'(\theta = 0) + ( \Omega d\theta ) \cdot x' \\ &= x + ( \Omega d\theta ) \cdot x' \end{aligned}

This is close to the postulated form in (2), but differs in one notable way: the dot product with the antisymmetric form $A = \frac{1}{2} \frac{dR}{d\theta} \tilde{R} d\theta$ is with $x'$ and not $x$! One can however invert the identity, writing $x$ in terms of $x'$ (to first order)

\begin{aligned}x = x' - ( \Omega d\theta ) \cdot x' \end{aligned}

Replaying this argument in fast forward for the inverse transformation should give us a relation for $x'$ in terms of $x$ and the incremental Lorentz transform

\begin{aligned}x' &= R x \tilde{R} \\ \implies \\ x &= \tilde{R} x' {R} \end{aligned}

\begin{aligned}\frac{dx}{d\theta}&= \frac{d \tilde{R}}{d\theta} R \tilde{R} x' R + \tilde{R} x' R \tilde{R} \frac{d {R}}{d\theta} \\ &= \left(2 \frac{d \tilde{R}}{d\theta} R \right) \cdot x \end{aligned}

So we have our incremental transformation given by

\begin{aligned}x'= x - \left(2 \frac{d \tilde{R}}{d\theta} R d\theta \right) \cdot x \end{aligned} \quad\quad\quad(5)

# Consider a specific infinitesimal spatial rotation.

The signs and primes involved in arriving at (5) were a bit confusing. To firm things up a bit, a specific example is called for.

For a rotation in the $x,y$ plane, we have

\begin{aligned}R &= e^{\gamma_1 \gamma_2 \theta/2} \\ x' &= R x \tilde{R} \end{aligned}

Here also it is easy to get the signs wrong, and it is worth pointing out the sign convention picked here for the Dirac basis is ${\gamma_0}^2 = -{\gamma_k}^2 = 1$. To verify that $R$ does the desired job, we have

\begin{aligned}R \gamma_1 \tilde{R}&=\gamma_1 \tilde{R^2} \\ &=\gamma_1 e^{\gamma_2 \gamma_1 \theta} \\ &=\gamma_1 (\cos\theta + \gamma_2 \gamma_1 \sin\theta) \\ &=\gamma_1 (\cos\theta - \gamma_1 \gamma_2 \sin\theta) \\ &=\gamma_1 \cos\theta + \gamma_2 \sin\theta \end{aligned}

and

\begin{aligned}R \gamma_2 \tilde{R}&=\gamma_2 \tilde{R^2} \\ &=\gamma_2 e^{\gamma_2 \gamma_1 \theta} \\ &=\gamma_2 (\cos\theta + \gamma_2 \gamma_1 \sin\theta) \\ &=\gamma_2 \cos\theta - \gamma_1 \sin\theta \end{aligned}

For $\gamma_3$ or $\gamma_0$, the rotor $R$ commutes, so we have

\begin{aligned}R \gamma_3 \tilde{R} &= R \tilde{R} \gamma_3 = \gamma_3 \\ R \gamma_0 \tilde{R} &= R \tilde{R} \gamma_0 = \gamma_0 \end{aligned}

(leaving the perpendicular basis directions unchanged).

Summarizing the action on the basis vectors in matrix form this is

\begin{aligned}\begin{bmatrix}\gamma_0 \\ \gamma_1 \\ \gamma_2 \\ \gamma_3 \\ \end{bmatrix}\rightarrow\begin{bmatrix}1 & 0 & 0 & 0 \\ 0 & \cos\theta & \sin\theta & 0 \\ 0 & -\sin\theta & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix}\begin{bmatrix}\gamma_0 \\ \gamma_1 \\ \gamma_2 \\ \gamma_3 \\ \end{bmatrix} \end{aligned}

Observe that the basis vectors transform with the transposed matrix to the coordinates, and we have

\begin{aligned}\gamma_0 x^0+ \gamma_1 x^1+ \gamma_2 x^2+ \gamma_3 x^3 \rightarrow\gamma_0 x^0+x^1 (\gamma_1 \cos\theta + \gamma_2 \sin\theta)+x^2 (\gamma_2 \cos\theta - \gamma_1 \sin\theta)+\gamma_3 x^3 \end{aligned}

Dotting ${x'}^\mu = x' \cdot \gamma^\mu$ we have

\begin{aligned}x^0 &\rightarrow x^0 \\ x^1 &\rightarrow x^1 \cos\theta - x^2 \sin\theta \\ x^2 &\rightarrow x^1 \sin\theta +x^2 \cos\theta \\ x^3 &\rightarrow x^3 \end{aligned}

In matrix form this is the expected and familiar rotation matrix in coordinate form
\begin{aligned}\begin{bmatrix}x^0 \\ x^1 \\ x^2 \\ x^3 \\ \end{bmatrix}\rightarrow\begin{bmatrix}1 & 0 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta & 0 \\ 0 & \sin\theta & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix}\begin{bmatrix}x^0 \\ x^1 \\ x^2 \\ x^3 \\ \end{bmatrix} \end{aligned}
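As a quick numerical aside (my addition, not in the original post), the transpose relationship between the basis-vector matrix and this coordinate matrix, and the fact that the coordinate matrix is a proper rotation, are easy to confirm:

```python
import numpy as np

th = 0.3
c, s = np.cos(th), np.sin(th)

# basis-vector transformation matrix (acting on [g0, g1, g2, g3])
M_basis = np.array([[1, 0, 0, 0],
                    [0, c, s, 0],
                    [0, -s, c, 0],
                    [0, 0, 0, 1]])
# coordinate transformation matrix
M_coord = np.array([[1, 0, 0, 0],
                    [0, c, -s, 0],
                    [0, s, c, 0],
                    [0, 0, 0, 1]])

assert np.allclose(M_coord, M_basis.T)               # transposed, as observed
assert np.allclose(M_coord @ M_coord.T, np.eye(4))   # pure spatial rotation: orthogonal
```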

Moving on to the initial verification we have

\begin{aligned}2 \frac{d\tilde{R}}{d\theta} &= 2\frac{d}{d\theta} e^{\gamma_2\gamma_1 \theta/2} \\ &= \gamma_2 \gamma_1 e^{\gamma_2\gamma_1 \theta/2} \end{aligned}

So we have

\begin{aligned}2 \frac{d\tilde{R}}{d\theta} R &= \gamma_2 \gamma_1 e^{\gamma_2\gamma_1 \theta/2} e^{\gamma_1\gamma_2 \theta/2} \\ &= \gamma_2 \gamma_1 \end{aligned}

The antisymmetric form $\epsilon_{\mu\nu}$ in this case therefore appears to be nothing more than the unit bivector for the plane of rotation! We should now be able to verify the incremental transformation result from (5), which is in this specific case now calculated to be

\begin{aligned}x'= x + d\theta (\gamma_1 \gamma_2) \cdot x \end{aligned} \quad\quad\quad(8)
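One way to check (8) directly is with an explicit matrix representation of the algebra. The sketch below is my addition; the choice of the Dirac representation for the $\gamma_\mu$, and the extraction of components via traces, are not made in the post itself. It confirms both the finite rotation of coordinates and the first order form:

```python
import numpy as np

# Dirac-representation gamma matrices; these satisfy the sign convention
# used in the post: gamma_0^2 = +I, gamma_k^2 = -I.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
g = [np.block([[I2, Z2], [Z2, -I2]]),   # gamma_0
     np.block([[Z2, sx], [-sx, Z2]]),   # gamma_1
     np.block([[Z2, sy], [-sy, Z2]]),   # gamma_2
     np.block([[Z2, sz], [-sz, Z2]])]   # gamma_3
gup = [g[0], -g[1], -g[2], -g[3]]       # gamma^mu (index raised)

B = g[1] @ g[2]                         # unit bivector gamma_1 gamma_2, B^2 = -I
I4 = np.eye(4, dtype=complex)

def rotor(th):
    # R = exp(gamma_1 gamma_2 th/2), expanded using B^2 = -I
    return np.cos(th / 2) * I4 + np.sin(th / 2) * B

def sandwich(th, x):
    # x' = R x R~, returning transformed coordinates x'^mu = (1/4) tr(x' gamma^mu)
    X = sum(x[mu] * g[mu] for mu in range(4))
    Xp = rotor(th) @ X @ rotor(-th)     # reversion negates the bivector angle
    return np.array([np.trace(Xp @ gup[mu]).real / 4 for mu in range(4)])

x = np.array([2.0, 3.0, 5.0, 7.0])
th = 0.3

# finite rotation: matches the coordinate rotation matrix derived above
c, s = np.cos(th), np.sin(th)
assert np.allclose(sandwich(th, x), [x[0], x[1]*c - x[2]*s, x[1]*s + x[2]*c, x[3]])

# first order: x' = x + dth (gamma_1 gamma_2) . x, a (0, -x^2, x^1, 0) correction
dth = 1e-6
G = np.array([0.0, -x[2], x[1], 0.0])
assert np.allclose(sandwich(dth, x), x + dth * G, atol=1e-10)
```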

As a final check let’s look at the action of the rotation part of the transformation (8) on the coordinates $x^\mu$. Only the $x^1$ and $x^2$ coordinates need be considered, since there is no projection of the $\gamma_0$ or $\gamma_3$ components onto the plane $\gamma_1 \gamma_2$.

\begin{aligned}d\theta (\gamma_1 \gamma_2) \cdot (x^1 \gamma_1 + x^2 \gamma_2)&= d\theta {\left\langle{{ \gamma_1 \gamma_2 (x^1 \gamma_1 + x^2 \gamma_2) }}\right\rangle}_{1} \\ &= d\theta (\gamma_2 x^1 - \gamma_1 x^2) \end{aligned}

Now compare to the incremental transformation on the coordinates in matrix form. That is

\begin{aligned}\delta R&=d\theta \frac{d}{d\theta}{\left.\begin{bmatrix}1 & 0 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta & 0 \\ 0 & \sin\theta & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix}\right\vert}_{\theta=0} \\ &=d\theta{\left.\begin{bmatrix}0 & 0 & 0 & 0 \\ 0 & -\sin\theta & -\cos\theta & 0 \\ 0 & \cos\theta & -\sin\theta & 0 \\ 0 & 0 & 0 & 0 \\ \end{bmatrix}\right\vert}_{\theta=0} \\ &=d\theta\begin{bmatrix}0 & 0 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ \end{bmatrix} \end{aligned}

So acting on the coordinate vector

\begin{aligned}\delta R \begin{bmatrix}x^0 \\ x^1 \\ x^2 \\ x^3\end{bmatrix} &= d\theta\begin{bmatrix}0 & 0 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ \end{bmatrix} \begin{bmatrix}x^0 \\ x^1 \\ x^2 \\ x^3\end{bmatrix} \\ &=d\theta\begin{bmatrix}0 \\ -x^2 \\ x^1 \\ 0\end{bmatrix} \end{aligned}

This is exactly what we got above with the bivector dot product. Good.
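This agreement can also be checked numerically. Here is a small sketch (my addition) that differentiates the coordinate rotation matrix and applies the resulting generator:

```python
import numpy as np

def rot(th):
    # coordinate rotation matrix in the x-y plane, from the post
    c, s = np.cos(th), np.sin(th)
    return np.array([[1, 0, 0, 0],
                     [0, c, -s, 0],
                     [0, s, c, 0],
                     [0, 0, 0, 1]])

# numerical d(rot)/dtheta at theta = 0 (central difference)
h = 1e-6
dM = (rot(h) - rot(-h)) / (2 * h)

G = np.array([[0, 0, 0, 0],
              [0, 0, -1, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 0]])
assert np.allclose(dM, G, atol=1e-9)

# generator acting on coordinates gives (0, -x^2, x^1, 0)
x = np.array([2.0, 3.0, 5.0, 7.0])
assert np.allclose(G @ x, [0.0, -5.0, 3.0, 0.0])
```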

# Consider a specific infinitesimal boost.

For a boost along the $x$ axis we have

\begin{aligned}R &= e^{\gamma_0\gamma_1 \alpha/2} \\ x' &= R x \tilde{R} \end{aligned}

Verifying, we have

\begin{aligned}x^0 \gamma_0 &\rightarrow x^0 ( \cosh\alpha + \gamma_0 \gamma_1 \sinh\alpha ) \gamma_0 \\ &= x^0 ( \gamma_0 \cosh\alpha - \gamma_1 \sinh\alpha ) \end{aligned}

\begin{aligned}x^1 \gamma_1 &\rightarrow x^1 ( \cosh\alpha + \gamma_0 \gamma_1 \sinh\alpha ) \gamma_1 \\ &= x^1 ( \gamma_1 \cosh\alpha - \gamma_0 \sinh\alpha ) \end{aligned}

Dot products recover the familiar boost matrix
\begin{aligned}\begin{bmatrix}x^0 \\ x^1 \\ x^2 \\ x^3 \\ \end{bmatrix}'&=\begin{bmatrix}\cosh\alpha & -\sinh\alpha & 0 & 0 \\ -\sinh\alpha & \cosh\alpha & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix}\begin{bmatrix}x^0 \\ x^1 \\ x^2 \\ x^3 \\ \end{bmatrix} \end{aligned}
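As an aside (my addition, not in the original post), one can verify numerically that this boost matrix preserves the Minkowski interval, i.e. $\Lambda^\mathrm{T} \eta \Lambda = \eta$:

```python
import numpy as np

alpha = 0.7
ch, sh = np.cosh(alpha), np.sinh(alpha)
L = np.array([[ch, -sh, 0, 0],
              [-sh, ch, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])

# Lambda^T eta Lambda = eta: the interval x . x is preserved
eta = np.diag([1.0, -1.0, -1.0, -1.0])
assert np.allclose(L.T @ eta @ L, eta)
```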

Now, how about the incremental transformation given by (5). A quick calculation shows that we have

\begin{aligned}x' = x + d\alpha (\gamma_0 \gamma_1) \cdot x \end{aligned} \quad\quad\quad(11)

Just like the (8) case for a rotation in the $x y$ plane, the antisymmetric form is again the unit bivector of the rotation plane (this time the unit bivector in the spacetime plane of the boost.)
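The boost case (11) can be checked the same way as the rotation case; the following numerical sketch (my addition) differentiates the boost matrix at $\alpha = 0$ and compares the generator's action with the coordinates of $(\gamma_0 \gamma_1) \cdot x$:

```python
import numpy as np

def boost(alpha):
    # coordinate boost matrix along x, from the post
    ch, sh = np.cosh(alpha), np.sinh(alpha)
    return np.array([[ch, -sh, 0, 0],
                     [-sh, ch, 0, 0],
                     [0, 0, 1, 0],
                     [0, 0, 0, 1]])

h = 1e-6
K = (boost(h) - boost(-h)) / (2 * h)   # generator = d(boost)/dalpha at alpha = 0
expected = np.array([[0, -1, 0, 0],
                     [-1, 0, 0, 0],
                     [0, 0, 0, 0],
                     [0, 0, 0, 0]])
assert np.allclose(K, expected, atol=1e-9)

# action on coordinates matches (gamma_0 gamma_1) . x = -x^1 gamma_0 - x^0 gamma_1
x = np.array([2.0, 3.0, 5.0, 7.0])
assert np.allclose(K @ x, [-3.0, -2.0, 0.0, 0.0])
```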

This completes the examination of two specific incremental Lorentz transformations. It is clear that the result will be the same for an arbitrarily oriented bivector, and the original guess (2) of a geometric equivalent of tensor relation (1) was correct, provided that $A$ is a unit bivector scaled by the magnitude of the incremental transformation.

The specific cases not treated here, however, are those transformations where the orientation of the bivector is allowed to change. Parameterizing those by angle is not such an obvious procedure.

# In tensor form.

For an arbitrary bivector $A = a \wedge b$, we can calculate ${\epsilon^\sigma}_\alpha$. That is

\begin{aligned}\epsilon^{\sigma\alpha} x_\alpha&=d\theta\frac{((a^\mu \gamma_\mu \wedge b^\nu \gamma_\nu) \cdot ( x_\alpha \gamma^\alpha)) \cdot \gamma^\sigma }{{\left\lvert{((a^\mu \gamma_\mu) \wedge (b^\nu \gamma_\nu)) \cdot ((a_\alpha \gamma^\alpha) \wedge (b_\beta \gamma^\beta))}\right\rvert}^{1/2}} \\ &=d\theta\frac{ a^\sigma b^\alpha - a^\alpha b^\sigma }{{\left\lvert{a^\mu b^\nu( a_\nu b_\mu - a_\mu b_\nu)}\right\rvert}^{1/2}} x_\alpha \end{aligned}

So we have

\begin{aligned}{\epsilon^\sigma}_\alpha&=d\theta\frac{ a^\sigma b_\alpha - a_\alpha b^\sigma }{{\left\lvert{a^\mu b^\nu( a_\nu b_\mu - a_\mu b_\nu)}\right\rvert}^{1/2}} \end{aligned}

The denominator can be subsumed into $d\theta$, so the important factor is just the numerator, which encodes an incremental boost or rotation in some arbitrary spacetime or spatial plane (respectively). The associated antisymmetry can be viewed as a consequence of the bivector nature of the rotor-derivative rotor product $\frac{d\tilde{R}}{d\theta} R$.

# References

[1] M. Montesinos and E. Flores. Symmetric energy-momentum tensor in Maxwell, Yang-Mills, and Proca theories obtained using only Noether's theorem. arXiv preprint hep-th/0602190, 2006.

1. ### Cartanian said

Very interesting, Peeter. The Lie algebra of the Poincare group generators includes
$\left[ M_{ab}, P_c \right]= \eta_{ac}P_b-\eta_{bc}P_a$
where M is a Lorentz generator and P the translation generators.
If we substitute

$M_{mn}= \gamma_m\gamma_n,\ \eta_{ab}=\gamma_a\gamma_b$

can we find P ?

2. ### Cartanian said

Whoops, I meant this,

$\eta_{ab}=\gamma_a \cdot \gamma_b$

• ### peeterjoot said

I’m not yet familiar with Lie Algebras. What is the definition of $P_m$? I presume these are differential operators of some sort?

3. ### Cartanian said

I’m assuming that bivectors are the generators of rotations and boosts. Expanding the first equation I wrote above for spatial rotations gives three equations

$(\gamma_1\gamma_2)P_3 - P_3(\gamma_1\gamma_2)=P_2-P_1$
$(\gamma_1\gamma_3)P_2 - P_2(\gamma_1\gamma_3)=P_3-P_1$
$(\gamma_2\gamma_3)P_1 - P_1(\gamma_2\gamma_3)=P_3-P_2$

Is there a scalar or vector or multivector that can satisfy this? I think it has to be a vector since it has 4 components.

• ### peeterjoot said

By process of elimination.

1. Suppose $P$ is a vector, $P = \gamma^k P_k$, with $P_k$ scalars, then we’d have

\begin{aligned}\frac{1}{{2}} \left[{\gamma_a \wedge \gamma_b},{P}\right] &= (\gamma_a \wedge \gamma_b) \cdot (\gamma^k P_k) \\ &=(\gamma_a {\delta_b}^k - \gamma_b {\delta_a}^k ) P_k \\ &=\gamma_a P_b - \gamma_b P_a \end{aligned}

This is a vector, whereas your expression would be a scalar, so it doesn’t look like what you are after.
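(This commutator identity for the vector case is easy to check numerically with an explicit matrix representation; the sketch below is my addition, assuming the standard Dirac representation for the $\gamma_\mu$, which matches the post's sign convention.)

```python
import numpy as np

# standard Dirac-representation gamma matrices (gamma_0^2 = +I, gamma_k^2 = -I)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
g = [np.block([[I2, Z2], [Z2, -I2]]),
     np.block([[Z2, sx], [-sx, Z2]]),
     np.block([[Z2, sy], [-sy, Z2]]),
     np.block([[Z2, sz], [-sz, Z2]])]
gup = [g[0], -g[1], -g[2], -g[3]]         # gamma^k, index raised with eta

rng = np.random.default_rng(2)
P = rng.standard_normal(4)                # scalar components P_k
X = sum(gup[k] * P[k] for k in range(4))  # the vector gamma^k P_k

a, b = 1, 2
B = g[a] @ g[b]                           # gamma_a ^ gamma_b for a != b
lhs = (B @ X - X @ B) / 2                 # bivector . vector as half the commutator
rhs = g[a] * P[b] - g[b] * P[a]
assert np.allclose(lhs, rhs)
```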

2. Suppose $P$ is a bivector, $P = (\gamma^k \wedge \gamma^m) P_{km}$, then the commutator will be a bivector not a vector. This is a messier expansion (perhaps there’s a clever way to do it, but I don’t know what it is).

\begin{aligned}\frac{1}{{2}} \left[{\gamma_a \wedge \gamma_b},{P}\right] &= {\left\langle{{ (\gamma_a \wedge \gamma_b) (\gamma^k \wedge \gamma^m) }}\right\rangle}_{2} P_{km} \\ &= {\left\langle{{ (\gamma_a \gamma_b - \gamma_a \cdot \gamma_b) (\gamma^k \wedge \gamma^m) }}\right\rangle}_{2} P_{km} \\ &= {\left\langle{{ \gamma_a (\gamma_b \cdot (\gamma^k \wedge \gamma^m)) }}\right\rangle}_{2} P_{km} + {\left\langle{{ \gamma_a (\gamma_b \wedge (\gamma^k \wedge \gamma^m)) }}\right\rangle}_{2} P_{km} - (\gamma_a \cdot \gamma_b) (\gamma^k \wedge \gamma^m) P_{km} \\ &= (\gamma_a \wedge \gamma^m) P_{b m} -(\gamma_a \wedge \gamma^k) P_{k b} - (\gamma_a \cdot \gamma_b) (\gamma^k \wedge \gamma^m) P_{km} \\ &+ (\gamma_a \cdot \gamma_b) (\gamma^k \wedge \gamma^m) P_{km} - (\gamma_b \wedge \gamma^m) P_{a m} + (\gamma_b \wedge \gamma^k) P_{k a} \\ &= (\gamma_a \wedge \gamma^c) (P_{b c} -P_{c b})+ (\gamma_b \wedge \gamma^c) (P_{c a} -P_{a c} )\\ &= \eta_{ad}(\gamma^d \wedge \gamma^c) (P_{b c} -P_{c b})- \eta_{bd} (\gamma^d \wedge \gamma^c) ( P_{a c} -P_{c a} )\\ \end{aligned}

If you let

\begin{aligned}{P_m}^d = 2 (\gamma^d \wedge \gamma^c) (P_{m c} -P_{c m}) \end{aligned}

you have

\begin{aligned}\left[{\gamma_a \wedge \gamma_b},{P}\right] = \eta_{ad} {P_b}^d - \eta_{bd} {P_a}^d \end{aligned}

This looks something like what you are after, but has one too many non-free indexes. That leaves just a trivector bivector product to consider … but I’d rather go for a hot tub right now than consider that case ;)

4. ### Cartanian said

Given these equations,

\begin{aligned} P_aP_b-P_bP_a &= 0\\ (\gamma_a\gamma_b)P_c-P_c(\gamma_a\gamma_b) &= \eta_{ac}P_b-\eta_{bc}P_a \end{aligned}
I think there’s a solution of the form
\begin{aligned} \left[ \begin{array}{c} P_0 \\ P_1 \\ P_2 \\ P_3 \end{array} \right] = \left[ \begin{array}{cccc} 0 & f_{01} & f_{02} & f_{03}\\ f_{10} & 0 & f_{12} & f_{13}\\ f_{20} & f_{21} & 0 & f_{23}\\ f_{30} & f_{31} & f_{32} & 0 \end{array} \right] \left[ \begin{array}{c} \gamma_0 \\ \gamma_1 \\ \gamma_2 \\ \gamma_3 \end{array} \right] \end{aligned}
I think this makes physical sense, because the action of a P will be written as an antisymmetric function, which will mean $P_a$ acts in the ‘a’ direction as it should.
I’ll try and explicate this later.

5. ### Cartanian said

Does the latex have to be in one line?

latex\begin{align}P_aP_b-P_bP_a &= 0\\(\gamma_a\gamma_b)P_c-P_c(\gamma_a\gamma_b) &= \eta_{ac}P_b\eta_{bc}P_a\end{align}

• ### peeterjoot said

yup, one line, and you have to use {aligned} instead of {align}. I edited your initial comment. If you want, I have a script that converts standalone latex to wordpress latex here:

tex2blog

There’s some other such scripts around, but I didn’t find one that handled multiple argument latex macros well (and I didn’t know php well enough to try to muck with them).

I give up.

7. ### Cartanian said

Hi Peeter,
your workings in post 3 are what’s needed. I’ve been attempting the same but with less progress. It’s not as straightforward as I thought. The P’s also have to satisfy

$[P_a,P_b]=0$

‘Cartanian’

8. ### Vector Dot Product | Engineer Sphere said

[…] Poincare transformations […]