Peeter Joot's (OLD) Blog.

Math, physics, perl, and programming obscurity.

PHY450H1S. Relativistic Electrodynamics Lecture 6 (Taught by Prof. Erich Poppitz). Four vectors and tensors.

Posted by peeterjoot on January 25, 2011



Still covering chapter 1 material from the text [1].

Covering Professor Poppitz’s lecture notes: nonrelativistic limit of boosts (33); number of parameters of Lorentz transformations (34-35); introducing four-vectors, the metric tensor, the invariant “dot-product”, and SO(1,3) (36-40); the Poincare group (41); the convenience of “upper” and “lower” indices (42-43); tensors (44).

The Special Orthogonal group (for Euclidean space).

Lorentz transformations are like “rotations” for (t, x, y, z) that preserve (ct)^2 - x^2 - y^2 - z^2. There are 6 continuous parameters:

\begin{itemize}
\item 3 rotations in x,y,z space,
\item 3 “boosts” in the x, y, or z directions.
\end{itemize}

For rotations of space we talk about a group of transformations of 3D Euclidean space, called the SO(3) group. Here S is for Special, O for Orthogonal, and 3 for the number of dimensions.

For a transformed vector in 3D space we write

\begin{aligned}\begin{bmatrix}x \\ y \\ z\end{bmatrix} \rightarrow \begin{bmatrix}x \\ y \\ z\end{bmatrix}' = O \begin{bmatrix}x \\ y \\ z\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(2.1)

Here O is an orthogonal 3 \times 3 matrix, and has the property

\begin{aligned}O^T O = \mathbf{1}.\end{aligned} \hspace{\stretch{1}}(2.2)

Taking determinants, we have

\begin{aligned}\det{ O^T } \det{ O} = 1,\end{aligned} \hspace{\stretch{1}}(2.3)

and since \det{O^\text{T}} = \det{ O }, we have

\begin{aligned}(\det{O})^2 = 1,\end{aligned} \hspace{\stretch{1}}(2.4)

so our determinant must be

\begin{aligned}\det O = \pm 1.\end{aligned} \hspace{\stretch{1}}(2.5)

We work with the positive case only, avoiding the transformations that include reflections.

The orthogonality condition O^\text{T} O = \mathbf{1} is an indication that the inner product is preserved. Observe that in matrix form we can write the inner product

\begin{aligned}\mathbf{r}_1 \cdot \mathbf{r}_2 = \begin{bmatrix}x_1 & y_1 & z_1\end{bmatrix}\begin{bmatrix}x_2 \\ y_2 \\ z_2 \\ \end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(2.6)

For a transformed vector X' = O X, we have {X'}^\text{T} = X^\text{T} O^\text{T}, and

\begin{aligned}X' \cdot X' = (X^\text{T} O^\text{T}) (O X) = X^\text{T} (O^\text{T} O) X = X^\text{T} X = X \cdot X.\end{aligned} \hspace{\stretch{1}}(2.7)
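These SO(3) properties are easy to check numerically. As a sketch (not from the lecture; the rotation angle and vectors are arbitrary, and numpy is assumed available), take a rotation about the z-axis and verify orthogonality, the unit determinant, and invariance of the dot product:

```python
# Numeric check of the SO(3) properties derived above.
import numpy as np

theta = 0.7  # arbitrary rotation angle about the z-axis
O = np.array([
    [np.cos(theta), -np.sin(theta), 0.0],
    [np.sin(theta),  np.cos(theta), 0.0],
    [0.0,            0.0,           1.0],
])

# Orthogonality: O^T O = 1, as in (2.2)
assert np.allclose(O.T @ O, np.eye(3))

# Special: det O = +1 (a proper rotation, no reflection), as in (2.5)
assert np.isclose(np.linalg.det(O), 1.0)

# The Euclidean dot product is preserved: (O r1) . (O r2) = r1 . r2, as in (2.7)
r1 = np.array([1.0, 2.0, 3.0])
r2 = np.array([-4.0, 0.5, 2.0])
assert np.isclose((O @ r1) @ (O @ r2), r1 @ r2)
```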

The Special Orthogonal group (for spacetime).

This generalizes to Lorentz boosts! There are two differences:

\begin{itemize}
\item Lorentz transformations should be 4 \times 4, not 3 \times 3, and act on (ct, x, y, z), NOT (x,y,z).
\item They should leave invariant NOT \mathbf{r}_1 \cdot \mathbf{r}_2, but c^2 t_2 t_1 - \mathbf{r}_2 \cdot \mathbf{r}_1.
\end{itemize}

Don’t be confused that I demanded c^2 t_2 t_1 - \mathbf{r}_2 \cdot \mathbf{r}_1 = \text{invariant} rather than c^2 (t_2 - t_1)^2 - (\mathbf{r}_2 - \mathbf{r}_1)^2 = \text{invariant}. Expansion of this (squared) interval provides just this four vector dot product and its invariance condition

\begin{aligned}\text{invariant} &=c^2 (t_2 - t_1)^2 - (\mathbf{r}_2 - \mathbf{r}_1)^2 \\ &=(c^2 t_2^2 - \mathbf{r}_2^2) + (c^2 t_1^2 - \mathbf{r}_1^2)- 2 c^2 t_2 t_1 + 2 \mathbf{r}_1 \cdot \mathbf{r}_2.\end{aligned}

Observe that we have the sum of two invariants plus our new cross term, so this cross term, (-2 times our dot product to be defined), must also be an invariant.

Introduce the four vector

\begin{aligned}x^0 &= ct \\ x^1 &= x \\ x^2 &= y \\ x^3 &= z \end{aligned}

Or (x^0, x^1, x^2, x^3) = \{ x^i, i = 0,1,2,3 \}.

We will also write

\begin{aligned}x^i &= (ct, \mathbf{r}) \\ \tilde{x}^i &= (c\tilde{t}, \tilde{\mathbf{r}})\end{aligned}

Our inner product is

\begin{aligned}c^2 t \tilde{t} - \mathbf{r} \cdot \tilde{\mathbf{r}}\end{aligned} \hspace{\stretch{1}}(3.8)

Introduce the 4 \times 4 matrix

\begin{aligned} \left\lVert{g_{ij}}\right\rVert = \begin{bmatrix}1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \\ \end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.9)

This is called the Minkowski spacetime metric.


\begin{aligned}c^2 t \tilde{t} - \mathbf{r} \cdot \tilde{\mathbf{r}}&\equiv \sum_{i, j = 0}^3 \tilde{x}^i g_{ij} x^j \\ &= \tilde{x}^0 x^0 -\tilde{x}^1 x^1 -\tilde{x}^2 x^2 -\tilde{x}^3 x^3 \end{aligned}

\paragraph{Einstein summation convention.} Whenever indexes are repeated, they are assumed to be summed over.

We also write

\begin{aligned}X = \begin{bmatrix}x^0 \\ x^1 \\ x^2 \\ x^3 \\ \end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.10)

\begin{aligned}\tilde{X} = \begin{bmatrix}\tilde{x}^0 \\ \tilde{x}^1 \\ \tilde{x}^2 \\ \tilde{x}^3 \\ \end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.11)

\begin{aligned}G = \begin{bmatrix}1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \\ \end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.12)

Our inner product

\begin{aligned}c^2 t \tilde{t} - \tilde{\mathbf{r}} \cdot \mathbf{r} = \tilde{X}^\text{T} G X &=\begin{bmatrix}\tilde{x}^0 & \tilde{x}^1 & \tilde{x}^2 & \tilde{x}^3 \end{bmatrix}\begin{bmatrix}1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \\ \end{bmatrix}\begin{bmatrix}x^0 \\ x^1 \\ x^2 \\ x^3 \\ \end{bmatrix}\end{aligned}
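As a small numeric sketch (with made-up component values, c set to 1, and numpy assumed), the matrix form of this inner product agrees with the component form:

```python
# The Minkowski inner product as a matrix product X~^T G X, with c = 1.
import numpy as np

G = np.diag([1.0, -1.0, -1.0, -1.0])  # Minkowski metric, signature (+,-,-,-)

X  = np.array([2.0, 1.0, 0.5, -1.0])   # (ct, x, y, z)
Xt = np.array([3.0, -1.0, 2.0, 0.0])   # (c t~, r~)

# Matrix form of the inner product
inner = Xt @ G @ X

# Component form: c^2 t t~ - r . r~
assert np.isclose(inner, Xt[0]*X[0] - Xt[1:] @ X[1:])
print(inner)  # 3*2 - ((-1)*1 + 2*0.5 + 0*(-1)) = 6.0
```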

Under Lorentz boosts, we have

\begin{aligned}X = \hat{O} X',\end{aligned} \hspace{\stretch{1}}(3.13)


\begin{aligned}\hat{O} =\begin{bmatrix}\gamma & - \gamma v_x/c  & 0 & 0 \\ - \gamma v_x/c & \gamma  & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.14)

(for x-direction boosts)

\begin{aligned}\tilde{X} &= \hat{O} \tilde{X}' \\ \tilde{X}^\text{T} &= \tilde{X'}^\text{T} \hat{O}^\text{T}\end{aligned} \hspace{\stretch{1}}(3.15)

But \hat{O} must be such that \tilde{X}^\text{T} G X is invariant. i.e.

\begin{aligned}\tilde{X}^\text{T} G X = {\tilde{X'}}^\text{T} (\hat{O}^\text{T} G \hat{O}) X' = {\tilde{X'}}^\text{T} G X' \qquad \forall X' \text{ and } \tilde{X}'.\end{aligned} \hspace{\stretch{1}}(3.16)

This implies

\begin{aligned}\boxed{\hat{O}^\text{T} G \hat{O} = G}\end{aligned} \hspace{\stretch{1}}(3.17)

Such \hat{O}'s are called “pseudo-orthogonal”.

Lorentz transformations are represented by the set of all 4 \times 4 pseudo-orthogonal matrices.
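The x-direction boost matrix of (3.14) can be checked against the pseudo-orthogonality condition numerically. A sketch (the velocity value is arbitrary; numpy assumed):

```python
# Check that the x-direction boost matrix satisfies O^T G O = G.
import numpy as np

G = np.diag([1.0, -1.0, -1.0, -1.0])

beta = 0.6                          # v_x / c, arbitrary
gamma = 1.0 / np.sqrt(1.0 - beta**2)

O = np.array([
    [gamma,       -gamma*beta, 0.0, 0.0],
    [-gamma*beta,  gamma,      0.0, 0.0],
    [0.0,          0.0,        1.0, 0.0],
    [0.0,          0.0,        0.0, 1.0],
])

assert np.allclose(O.T @ G @ O, G)        # pseudo-orthogonal, as in (3.17)
assert np.isclose(np.linalg.det(O), 1.0)  # proper: no reflections
```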

In symbols

\begin{aligned}\hat{O}^T G \hat{O} = G\end{aligned} \hspace{\stretch{1}}(3.18)

Just as before we can take the determinant of both sides. Doing so we have

\begin{aligned}\det(\hat{O}^T G \hat{O}) = \det(\hat{O}^T) \det(G) \det(\hat{O}) = \det(G)\end{aligned} \hspace{\stretch{1}}(3.19)

The \det(G) terms cancel, and since \det(\hat{O}^T) = \det(\hat{O}), this leaves us with (\det(\hat{O}))^2 = 1, or

\begin{aligned}\det(\hat{O}) = \pm 1\end{aligned} \hspace{\stretch{1}}(3.20)

We take the \det \hat{O} = +1 case only, so that the transformations do not change orientation (no reflection in space or time). This set of transformations forms the group

\begin{aligned}SO(1,3),\end{aligned}

special orthogonal, with one time and 3 space dimensions.

Einstein relativity can be defined as the “laws of physics that leave four vectors invariant in the

\begin{aligned}SO(1,3) \times T^4\end{aligned}

symmetry group.”

Here T^4 is the group of translations in spacetime, with 4 continuous parameters. The complete group of transformations of relativistic physics has 10 = 3 + 3 + 4 continuous parameters.

This group is called the Poincare group of symmetry transforms.

More notation

Our inner product is written

\begin{aligned}\tilde{x}^i g_{ij} x^j\end{aligned} \hspace{\stretch{1}}(4.21)

but this is very cumbersome. The convenient way to write this is instead

\begin{aligned}\tilde{x}^i g_{ij} x^j = \tilde{x}_j x^j = \tilde{x}^i x_i\end{aligned} \hspace{\stretch{1}}(4.22)


\begin{aligned}x_i = g_{ij} x^j = g_{ji} x^j\end{aligned} \hspace{\stretch{1}}(4.23)

Note: A check that we should always be able to make: indexes that are not summed over should be conserved. So in the above we have a free i on the LHS, and should have a non-summed i index on the RHS too (with lower matching lower, or upper matching upper).

Non-matched indexes are bad in the same sort of sense that an expression like

\begin{aligned}\mathbf{r} = 1\end{aligned} \hspace{\stretch{1}}(4.24)

isn’t well defined (assuming \mathbf{r} is a vector, and not a multivector of a Clifford algebra, that is ;)

Example explicitly:

\begin{aligned}x_0 &= g_{0 0} x^0 = ct  \\ x_1 &= g_{1 j} x^j = g_{11} x^1 = -x^1 \\ x_2 &= g_{2 j} x^j = g_{22} x^2 = -x^2 \\ x_3 &= g_{3 j} x^j = g_{33} x^3 = -x^3\end{aligned}
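Numerically (with sample component values, c = 1, and numpy assumed), lowering an index is just multiplication by the metric, and it flips the sign of the spatial components exactly as above:

```python
# Index lowering x_i = g_{ij} x^j as a matrix-vector product, c = 1.
import numpy as np

G = np.diag([1.0, -1.0, -1.0, -1.0])

x_upper = np.array([5.0, 1.0, 2.0, 3.0])  # (ct, x, y, z)
x_lower = G @ x_upper

# Time component unchanged, spatial components negated
assert np.allclose(x_lower, [5.0, -1.0, -2.0, -3.0])

# The invariant x^i x_i = (ct)^2 - r^2, NOT (ct)^2 + r^2
assert np.isclose(x_upper @ x_lower, 5.0**2 - (1.0**2 + 2.0**2 + 3.0**2))
```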

We would not have objects of the form

\begin{aligned}x^i x^i = (ct)^2 + \mathbf{r}^2\end{aligned} \hspace{\stretch{1}}(4.25)

for example. This is not a Lorentz invariant quantity.

\paragraph{Lorentz scalar example:} \tilde{x}^i x_i
\paragraph{Lorentz vector example:} x^i

This last is also called a rank-1 tensor.

Lorentz rank-2 tensors: ex: g_{ij}

or other 2-index objects.

Why in the world would we ever want to consider two index objects? We aren’t just trying to be hard on ourselves. Recall from classical mechanics that we have a two index object, the inertial tensor.

In mechanics, for a rigid body we had the energy

\begin{aligned}T = \frac{1}{2} \sum_{i,j = 1}^3 \Omega_i I_{ij} \Omega_j\end{aligned} \hspace{\stretch{1}}(4.26)

The inertial tensor was this object

\begin{aligned}I_{ij} = \sum_{a = 1}^N m_a \left(\delta_{ij} \mathbf{r}_a^2 - r_{a_i} r_{a_j} \right)\end{aligned} \hspace{\stretch{1}}(4.27)

or for a continuous body

\begin{aligned}I_{ij} = \int \rho(\mathbf{r}) \left(\delta_{ij} \mathbf{r}^2 - r_{i} r_{j} \right) d^3 \mathbf{r}\end{aligned} \hspace{\stretch{1}}(4.28)

In electrostatics we have the quadrupole tensor, … and we have other such objects all over physics.

Note that the energy T of the body above cannot depend on the coordinate system in use. This is a general property of tensors. These are objects that transform as products of vectors, as I_{ij} does.

We call I_{ij} a rank-2 3-tensor. rank-2 because there are two indexes, and 3 because the indexes range from 1 to 3.

The point is that tensors have the property that the transformed tensors transform as

\begin{aligned}I_{ij}' = \sum_{l, m = 1}^{3} O_{il} O_{jm} I_{lm}\end{aligned} \hspace{\stretch{1}}(4.29)
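As a numeric sanity check (not from the lecture: the masses, positions, and rotation angle are made up, numpy is assumed, and the rotational energy is taken with its conventional factor of 1/2), we can verify that the inertia tensor transforms as a product of rotations and that T is coordinate independent:

```python
# Rank-2 3-tensor transformation: I' = O I O^T, and invariance of
# T = (1/2) Omega_i I_ij Omega_j under a rotation of both I and Omega.
import numpy as np

def inertia_tensor(masses, positions):
    # I_ij = sum_a m_a (delta_ij r_a^2 - r_ai r_aj), as in (4.27)
    I = np.zeros((3, 3))
    for m, r in zip(masses, positions):
        I += m * (np.eye(3) * (r @ r) - np.outer(r, r))
    return I

# Hypothetical point masses
masses = [1.0, 2.0, 0.5]
positions = [np.array([1.0, 0.0, 0.0]),
             np.array([0.0, 1.0, 1.0]),
             np.array([1.0, 1.0, -1.0])]
I = inertia_tensor(masses, positions)

theta = 0.3  # arbitrary rotation about z
O = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

omega = np.array([0.2, -0.1, 0.4])  # arbitrary angular velocity

# Tensor transformation law (4.29): I'_ij = O_il O_jm I_lm
I_rot = O @ I @ O.T

T = 0.5 * omega @ I @ omega
T_rot = 0.5 * (O @ omega) @ I_rot @ (O @ omega)
assert np.isclose(T, T_rot)  # the energy does not depend on coordinates
```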

Another example: the completely antisymmetric rank 3, 3-tensor

\begin{aligned}\epsilon_{ijk}\end{aligned} \hspace{\stretch{1}}(4.30)


In Newtonian dynamics we have

\begin{aligned}m \ddot{\mathbf{r}} = \mathbf{f}\end{aligned} \hspace{\stretch{1}}(5.31)

An equation of motion should be expressed in terms of vectors. This equation is written in a way that shows that the law of physics is independent of the choice of coordinates. We can do this in the context of tensor algebra as well. Ironically, this will require us to explicitly work with the coordinate representation, but this work will be augmented by the fact that we require our tensors to transform in specific ways.

In Newtonian mechanics we can look to symmetries and the invariance of the action with respect to those symmetries to express the equations of motion. Our symmetries in Newtonian mechanics leave the action invariant with respect to spatial translation and with respect to rotation.

We want to express relativistic dynamics in a similar way, and will have to express the action as a Lorentz scalar. We are going to impose the symmetries of the Poincare group to determine the relativistic laws of dynamics, and the next task will be to consider the possibilities for our relativistic action, and see what that action implies for dynamics in a relativistic context.


[1] L.D. Landau and E.M. Lifshitz. The Classical Theory of Fields. Butterworth-Heinemann, 1980.

