# Reading.

Covering chapter 3 material from the text [1].

Covering lecture notes pp. 74-83: Lorentz transformation of the strength tensor (82) [Tuesday, Feb. 8] [extra reading for the mathematically minded: gauge field, strength tensor, and gauge transformations in differential form language, not to be covered in class (83)]

Covering lecture notes pp. 84-102: Lorentz invariants of the electromagnetic field (84-86); Bianchi identity and the first half of Maxwell’s equations (87-90)

# Chewing on the four vector form of the Lorentz force equation.

After much effort, we arrived at

\begin{aligned}mc \frac{du^i}{ds} = \frac{e}{c} F^{ij} u_j,\end{aligned}

or

\begin{aligned}mc \frac{du_i}{ds} = \frac{e}{c} F_{ij} u^j.\end{aligned}

## Elements of the strength tensor

\paragraph{Claim}: there are only 6 independent elements of this matrix (tensor)

This is a no-brainer, for we just have to mechanically plug in the elements of the field strength tensor

Recall

\begin{aligned}F_{ij} = \partial_i A_j - \partial_j A_i.\end{aligned}

For the purely spatial index combinations we have

Written out explicitly, these are

We can compare this to the elements of

We see that

So we have

These can be summarized as simply

This provides all the info needed to fill in the matrix above
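The counting argument can be spot-checked numerically. Here is a minimal sketch, assuming the $(+,-,-,-)$ metric and the layout $F_{0\alpha} = E_\alpha$, $F_{\alpha\beta} = -\epsilon_{\alpha\beta\gamma} B_\gamma$ discussed above; the sample field values are arbitrary.

```python
import numpy as np

def field_tensor_lower(E, B):
    """Assemble F_{ij} (both indices lower) from E and B.

    Convention (an assumption matching the layout above):
    F_{0 alpha} = E_alpha, spatial block F_{alpha beta} = -eps_{abc} B_c.
    """
    Ex, Ey, Ez = E
    Bx, By, Bz = B
    return np.array([
        [ 0.,  Ex,  Ey,  Ez],
        [-Ex,  0., -Bz,  By],
        [-Ey,  Bz,  0., -Bx],
        [-Ez, -By,  Bx,  0.]])

# Arbitrary sample fields.
F = field_tensor_lower([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])

# Antisymmetry kills the diagonal and ties the lower triangle to the
# upper, leaving 4 * 3 / 2 = 6 independent entries: three E, three B.
assert np.allclose(F, -F.T)
assert np.count_nonzero(F[np.triu_indices(4, k=1)]) == 6
```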

## Index raising of rank 2 tensor

To raise indexes we compute

\begin{aligned}F^{ij} = g^{ia} g^{jb} F_{ab}.\end{aligned}

### Justifying the raising operation.

To justify this, consider raising one index at a time by applying the metric tensor to our definition of $F_{ij}$. That is

Now apply the metric tensor once more

This is, by definition, $F^{ij}$. Since a rank 2 tensor has been defined as an object that transforms like the product of two coordinates, it makes sense that this particular tensor raises in the same fashion as would a product of two vector coordinates (in this case it happens to be an antisymmetric product of two vectors, one of which is an operator, but the idea is the same).

### Consider the components of the raised tensor.
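As a numeric sketch of this raising operation, under the same assumed conventions ($(+,-,-,-)$ metric, $F_{0\alpha} = E_\alpha$): with a diagonal metric the double contraction $g^{ia} g^{jb} F_{ab}$ is just the matrix sandwich $G F G$, and only the time-space components change sign.

```python
import numpy as np

G = np.diag([1.0, -1.0, -1.0, -1.0])   # metric tensor, (+,-,-,-) signature

# F_{ij} for sample fields E = (1,2,3), B = (4,5,6), convention as assumed.
F_lower = np.array([
    [ 0.,  1.,  2.,  3.],
    [-1.,  0., -6.,  5.],
    [-2.,  6.,  0., -4.],
    [-3., -5.,  4.,  0.]])

# F^{ij} = g^{ia} g^{jb} F_{ab}; for a diagonal metric this is G @ F @ G.
F_upper = G @ F_lower @ G

# Each metric factor flips the sign of one spatial index, so the mixed
# time-space entries flip sign once and the spatial block is untouched.
assert np.allclose(F_upper[0, 1:], -F_lower[0, 1:])
assert np.allclose(F_upper[1:, 0], -F_lower[1:, 0])
assert np.allclose(F_upper[1:, 1:], F_lower[1:, 1:])
assert np.allclose(F_upper, -F_upper.T)   # antisymmetry survives raising
```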

## Back to chewing on the Lorentz force equation.

For the spatial components of the Lorentz force equation we have

But

Canceling the common terms, and switching to vector notation, we are left with

Now for the energy term. We have

Putting the final two lines into vector form we have

or
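Both the spatial and energy components of this reduction can be spot-checked numerically. This sketch assumes the $(+,-,-,-)$ metric, units with $c = e = 1$, the layout $F^{\alpha 0} = E_\alpha$, $F^{\alpha\beta} = -\epsilon_{\alpha\beta\gamma} B_\gamma$, and proper velocity $u^i = \gamma(c, \mathbf{v})$; all sample values are arbitrary.

```python
import numpy as np

c, e = 1.0, 1.0                          # units chosen so c = e = 1
E = np.array([1.0, 2.0, 3.0])
B = np.array([-2.0, 0.5, 1.0])
v = np.array([0.1, -0.2, 0.3])
gamma = 1.0 / np.sqrt(1.0 - v @ v / c**2)

G = np.diag([1.0, -1.0, -1.0, -1.0])     # metric tensor
Fu = np.array([                          # F^{ij} in the assumed convention
    [ 0.,   -E[0], -E[1], -E[2]],
    [ E[0],  0.,   -B[2],  B[1]],
    [ E[1],  B[2],  0.,   -B[0]],
    [ E[2], -B[1],  B[0],  0.]])

u_upper = gamma * np.array([c, *v])      # proper velocity u^i
u_lower = G @ u_upper                    # u_j

four_force = (e / c) * Fu @ u_lower      # (e/c) F^{ij} u_j

# Spatial part reduces to gamma e (E + (v/c) x B) ...
assert np.allclose(four_force[1:], gamma * e * (E + np.cross(v, B) / c))
# ... and the time component is the (scaled) rate of work done on the charge.
assert np.isclose(four_force[0], gamma * e * (E @ v) / c)
```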

# Transformation of rank two tensors in matrix and index form.

## Transformation of the metric tensor, and some identities.

With

\paragraph{We claim:}

The rank two tensor transforms in the following sort of sandwich operation, and this leaves it invariant

To demonstrate this let’s consider a transformed vector in coordinate form as follows

We can thus write the equation in matrix form with

Our invariant for the vector square, which is required to remain unchanged is

This shows that we have a delta function relationship for the Lorentz transform matrix, when we sum over the first index

It appears we can put 3.37 into matrix form as

Now, if one considers that the transpose of a rotation is an inverse rotation, and the transpose of a boost leaves it unchanged, then the transpose of a general Lorentz transformation, a composition of an arbitrary sequence of boosts and rotations, must also be a Lorentz transformation, and must then also leave the norm unchanged. For the transpose of our Lorentz transformation let's write

For the action of this on our position vector let’s write

so that our norm is

We must then also have an identity when summing over the second index

Armed with these identities for the products of the Lorentz transformation matrix and its transpose, we can now consider the transformation of the metric tensor.

The rule (definition) supplied to us for the transformation of an arbitrary rank two tensor is that it transforms as its indexes transform individually, very much as if it were the product of two coordinate vectors with each transformed separately. Doing so for the metric tensor we have

However, by 3.42, the sums over the paired transformation matrix elements reduce to delta functions, and we have proved that

Finally, to put the above transformation in matrix form, look more carefully at the very first line

which is

We see that this particular form of transformation, a sandwich between the Lorentz transformation matrix and its transpose, leaves the metric tensor invariant.
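This sandwich property is easy to verify numerically for a sample boost. A sketch, assuming the $(+,-,-,-)$ metric and an x-axis boost with $\beta = v/c$:

```python
import numpy as np

G = np.diag([1.0, -1.0, -1.0, -1.0])     # metric tensor

def boost_x(beta):
    """Lorentz boost along x with velocity beta = v/c."""
    g = 1.0 / np.sqrt(1.0 - beta**2)
    O = np.eye(4)
    O[0, 0] = O[1, 1] = g
    O[0, 1] = O[1, 0] = -g * beta
    return O

O = boost_x(0.6)

# The sandwich between the transformation and its transpose
# reproduces the metric: O^T G O = G.
assert np.allclose(O.T @ G @ O, G)
# The same holds for the transpose, itself a Lorentz transformation.
assert np.allclose(O @ G @ O.T, G)
# Equivalently, G O^T G acts as the inverse transformation.
assert np.allclose((G @ O.T @ G) @ O, np.eye(4))
```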

## Lorentz transformation of the electrodynamic tensor

Having identified a composition of Lorentz transformation matrices that, when acting on the metric tensor, leaves it invariant, it is reasonable to ask how this form of transformation acts on our electrodynamic tensor.

\paragraph{Claim:} A transformation of the following form is required to maintain the norm of the Lorentz force equation

Observe that our Lorentz force equation can be written exclusively in upper index quantities as

Because we have a vector on one side of the equation, and it transforms by multiplication with a Lorentz matrix in SO(1,3)

The LHS of the Lorentz force equation provides us with one invariant

so the RHS must also provide one

Let’s look at the RHS in matrix form. Writing

we can rewrite the Lorentz force equation as

In this matrix formalism our invariant 3.49 is

If we compare this to the transformed Lorentz force equation we have

Our invariant for the transformed equation is

Thus the transformed electrodynamic tensor must satisfy the identity

With this substitution, the LHS is

We’ve argued that the transpose of a Lorentz transformation is also a Lorentz transformation, thus

This is enough to make both sides of 3.54 match, verifying that this transformation does provide the invariant properties desired.

## Direct computation of the Lorentz transformation of the electrodynamic tensor.

We can construct the transformed field tensor more directly, by simply transforming the coordinates of the four gradient and the four potential directly. That is

By inspection we can see that this can be represented in matrix form as
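The matrix form can be checked against the index form numerically. A sketch assuming the conventions used in the earlier sketches (sample $F^{ij}$ for arbitrary fields, an x-axis boost), showing that the index expression ${F'}^{ij} = O^i{}_a O^j{}_b F^{ab}$ is the matrix sandwich $O F O^\text{T}$:

```python
import numpy as np

# Sample F^{ij} for E = (1,2,3), B = (4,5,6), with F^{alpha 0} = E_alpha.
Fu = np.array([
    [ 0., -1., -2., -3.],
    [ 1.,  0., -6.,  5.],
    [ 2.,  6.,  0., -4.],
    [ 3., -5.,  4.,  0.]])

beta = 0.6
g = 1.0 / np.sqrt(1.0 - beta**2)
O = np.eye(4)                      # x-axis boost
O[0, 0] = O[1, 1] = g
O[0, 1] = O[1, 0] = -g * beta

# Index form: F'^{ij} = O^i_a O^j_b F^{ab}, one Lorentz factor per index.
F_index = np.einsum('ia,jb,ab->ij', O, O, Fu)

# Matrix form: the same thing as a sandwich O F O^T.
assert np.allclose(F_index, O @ Fu @ O.T)
# Antisymmetry (hence the count of independent elements) is preserved.
assert np.allclose(F_index, -F_index.T)
```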

# Four vector invariants

For a pair of three vectors, the invariants are

For a pair of four vectors, the invariants are

For the field strength tensor, what are the invariants? One invariant is

\begin{aligned}g_{ij} F^{ij} = 0,\end{aligned}

but this isn’t interesting since it is uniformly zero (the contraction of a symmetric tensor with an antisymmetric one).

The two invariants are

and

where

\begin{aligned}\epsilon^{ijkl} = \left\{\begin{array}{ll} 0 & \quad \mbox{when any indexes are repeated} \\ 1 & \quad \mbox{for even permutations of $i j k l=0123$ } \\ -1 & \quad \mbox{for odd permutations of $i j k l=0123$ } \\ \end{array}\right.\end{aligned} \hspace{\stretch{1}}(4.61)

We can show (homework) that

\begin{aligned}F_{ij} F^{ij} = 2(\mathbf{B}^2 - \mathbf{E}^2),\end{aligned}

and that the second invariant is proportional to $\mathbf{E} \cdot \mathbf{B}$.

This first invariant serves as the action density for the Maxwell field equations.

There are some useful properties of these invariants. One is that if the fields are perpendicular in one frame, then they will be perpendicular in any other.

From the first, note that if $\mathbf{B}^2 > \mathbf{E}^2$, the invariant is positive, and must be positive in all frames; when $\mathbf{E} \cdot \mathbf{B} = 0$ as well, we can transform to a frame with only a $\mathbf{B}$ component, solve that, and then transform back. Similarly, if $\mathbf{E}^2 > \mathbf{B}^2$ in one frame, we can transform to a frame with only an $\mathbf{E}$ component, solve that, and then transform back.
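Both invariants can be checked numerically under a boost. A sketch assuming the conventions used in the earlier sketches (sample fields arbitrary); the pseudo-scalar invariant is computed with a brute-force Levi-Civita symbol:

```python
import numpy as np
from itertools import permutations

G = np.diag([1.0, -1.0, -1.0, -1.0])          # metric tensor

# Brute-force Levi-Civita symbol: sign of each permutation of 0123.
eps = np.zeros((4, 4, 4, 4))
for p in permutations(range(4)):
    eps[p] = np.linalg.det(np.eye(4)[list(p)])

def F_upper(E, B):
    """F^{ij} with F^{alpha 0} = E_alpha, spatial block -eps_{abc} B_c."""
    Ex, Ey, Ez = E
    Bx, By, Bz = B
    return np.array([
        [ 0., -Ex, -Ey, -Ez],
        [ Ex,  0., -Bz,  By],
        [ Ey,  Bz,  0., -Bx],
        [ Ez, -By,  Bx,  0.]])

def invariants(Fu):
    Fl = G @ Fu @ G                           # lower both indices
    s = np.einsum('ij,ij->', Fu, Fl)          # F^{ij} F_{ij} = 2 (B^2 - E^2)
    p = np.einsum('ijkl,ij,kl->', eps, Fu, Fu)  # proportional to E . B
    return s, p

E = np.array([1.0, 2.0, 3.0])
B = np.array([-2.0, 0.5, 1.0])
Fu = F_upper(E, B)

beta = 0.6
g = 1.0 / np.sqrt(1.0 - beta**2)
O = np.eye(4)                                 # x-axis boost
O[0, 0] = O[1, 1] = g
O[0, 1] = O[1, 0] = -g * beta

s0, p0 = invariants(Fu)
s1, p1 = invariants(O @ Fu @ O.T)             # boosted field tensor
assert np.isclose(s0, 2 * (B @ B - E @ E))    # value of the first invariant
assert np.isclose(s0, s1) and np.isclose(p0, p1)  # both are boost invariant
```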

# The first half of Maxwell’s equations.

\paragraph{Claim: } The source free portions of Maxwell’s equations are a consequence of the definition of the field tensor alone.

Given

where

This alone implies half of Maxwell’s equations. To show this we consider

\begin{aligned}\partial_i F_{jk} + \partial_j F_{ki} + \partial_k F_{ij} = 0.\end{aligned}

This is the Bianchi identity. To demonstrate this identity, we’ll have to swap indexes, employ derivative commutation, and then swap indexes once more
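The swap-commute-swap argument can also be verified symbolically. A sketch using sympy, with four arbitrary functions standing in for the potential components $A_i$ (names hypothetical):

```python
import itertools
import sympy as sp

# Coordinates x^0 .. x^3 and four arbitrary potential functions A_0 .. A_3.
X = sp.symbols('x0 x1 x2 x3')
A = [sp.Function(f'A{i}')(*X) for i in range(4)]

def F(i, j):
    """F_{ij} = d_i A_j - d_j A_i, straight from the definition."""
    return sp.diff(A[j], X[i]) - sp.diff(A[i], X[j])

# The cyclic sum d_i F_{jk} + d_j F_{ki} + d_k F_{ij} vanishes identically:
# every term is cancelled by its partner with the derivatives commuted.
for i, j, k in itertools.product(range(4), repeat=3):
    cyclic = (sp.diff(F(j, k), X[i])
              + sp.diff(F(k, i), X[j])
              + sp.diff(F(i, j), X[k]))
    assert sp.expand(cyclic) == 0
```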

This is the 4D analogue of

i.e.

Let’s do this explicitly, starting with

For the case of all spatial indexes, $i j k = 1 2 3$, we have

We must then have

This is just Gauss’s law for magnetism

\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{B} = 0.\end{aligned}

Let’s do the spatial portion, for which we have three equations, one for each of the index triplets pairing the time index with two distinct spatial indexes

This implies

Referring back to the previous expansions of 2.6 and 2.17, we have

or

These are just the components of the Maxwell-Faraday equation

\begin{aligned}\boldsymbol{\nabla} \times \mathbf{E} = -\frac{1}{c} \frac{\partial \mathbf{B}}{\partial t}.\end{aligned}

# Appendix. Some additional index gymnastics.

## Transposition of mixed index tensor.

Is the transpose of a mixed index object just a substitution of the free indexes? It wasn’t obvious to me that this would be the case, especially since I’d made an error in some index gymnastics that had me temporarily convinced otherwise. However, working some examples clears the fog. For example, let’s take the transpose of 3.37.

If the transpose of a mixed index tensor just swapped the indexes we would have

From this it does appear that all we have to do is switch the indexes and we will write

We can consider a more general operation

So we see that we do just have to swap indexes.

## Transposition of lower index tensor.

We saw above that we had

which followed from careful treatment of the transposition of the Lorentz transformation matrix, for which we defined a transpose operation. We assumed as well that

However, this does not have to be assumed, provided the index raising and lowering relations above hold. We see this by expanding the transposition in products of the metric and Lorentz transformation matrices

It would be worthwhile to go through all of this index manipulation stuff and lay it out in a structured axiomatic form. What is the minimal set of assumptions, and how does all of this generalize to non-diagonal metric tensors (even in Euclidean spaces).

## Translating the index expression of identity from Lorentz products to matrix form

A verification that the matrix expression 3.38 matches the index expression 3.37 as claimed is worthwhile. It would be easy to guess that a similar form, with the factors transposed, is instead the matrix representation. That was in fact my first erroneous attempt to form the matrix equivalent, but such a guess is the transpose of 3.38. Either way you get an identity, but the indexes don’t match.

Since we have a pair of such identities, which do we pick to do this verification? This appears to be dictated by the requirement to match lower and upper indexes on the summed-over index. This is probably clearest by example, so let’s expand the products on the LHS explicitly

This matches what we have on the RHS, and all is well.

# References

[1] L.D. Landau and E.M. Lifshitz. *The classical theory of fields*. Butterworth-Heinemann, 1980.