## (INCOMPLETE) Geometry of Maxwell radiation solutions

Posted by peeterjoot on August 18, 2009

[Click here for a PDF of this post with nicer formatting]

# Motivation

We have in GA multiple possible ways to parametrize an oscillatory time dependence for a radiation field.

This was going to be an attempt to systematically solve the resulting eigen-multivector problem, starting with an exponential time parametrization, but I got stuck part way. Perhaps using a plain old scalar imaginary would work out better, but I’ve spent more time on this than I want for now.

# Setup. The eigenvalue problem.

Again following Jackson ([1]), we use CGS units. Maxwell’s equation in these units, with $F = \mathbf{E} + I\mathbf{B}$, is

$$\left( \boldsymbol{\nabla} + \frac{1}{c} \partial_t \right) F = 0.$$

With an assumed oscillatory time dependence

$$F = F(\mathbf{x}) e^{-i \omega t},$$

Maxwell’s equation reduces to a multivector eigenvalue problem

$$\boldsymbol{\nabla} F = \frac{\omega}{c} F i.$$

We have some flexibility in picking the imaginary. As well as a non-geometric imaginary typically used for a phasor representation, where we take real parts of the field, we have additional possibilities, two of which are

$$i = e_1 e_2 e_3 = I, \qquad i = e_1 e_2 = I e_3.$$

The first is the spatial pseudoscalar, which commutes with all vectors and bivectors. The second is the unit bivector for the transverse plane, here parametrized by duality using the direction $e_3$ perpendicular to that plane.

Let’s examine the geometry required of the object $F$ for each of these two geometric modeling choices.
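One way to sanity check these commutation claims is numerically, via the standard Pauli matrix representation of $Cl(3)$ (a sketch; any faithful representation would do, and the variable names here are just for illustration):

```python
import numpy as np

# Pauli-matrix representation of the 3D geometric algebra Cl(3):
# the orthonormal basis vectors e1, e2, e3 map to the Pauli matrices.
e1 = np.array([[0, 1], [1, 0]], dtype=complex)
e2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
e3 = np.array([[1, 0], [0, -1]], dtype=complex)

I = e1 @ e2 @ e3          # spatial pseudoscalar e1 e2 e3
i_t = e1 @ e2             # transverse plane bivector e1 e2 = I e3

# Both candidate imaginaries square to -1.
assert np.allclose(I @ I, -np.eye(2))
assert np.allclose(i_t @ i_t, -np.eye(2))

# The pseudoscalar commutes with every vector (and hence every bivector).
for v in (e1, e2, e3):
    assert np.allclose(I @ v, v @ I)

# The transverse bivector e1 e2 anticommutes with the in-plane vectors,
# but commutes with the perpendicular direction e3.
assert np.allclose(i_t @ e1, -e1 @ i_t)
assert np.allclose(i_t @ e2, -e2 @ i_t)
assert np.allclose(i_t @ e3, e3 @ i_t)

# i_t is the dual of e3: e1 e2 = I e3.
assert np.allclose(i_t, I @ e3)

print("all commutation checks pass")
```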

# Using the transverse plane bivector for the imaginary.

Making no prior assumptions about $F$, let’s allow for the possibility of scalar, vector, bivector, and pseudoscalar components

$$F = \left\langle F \right\rangle_0 + \left\langle F \right\rangle_1 + \left\langle F \right\rangle_2 + \left\langle F \right\rangle_3.$$

Writing $i = e_1 e_2$, an expansion of this product separated into grades is

By construction $F$ has only vector and bivector grades, so a requirement of zero scalar and pseudoscalar grades everywhere means that we have four immediate constraints.

Since we have the flexibility to add or subtract any scalar multiple of $i$ to $F$, our field can now be written as just

We can similarly require , leaving

So, just the geometrical constraints give us

The first thing to note is that this phasor representation, using the transverse plane bivector for the imaginary, cannot be the most general: it allows for only transverse fields! This can be seen two ways. Computing the transverse and propagation field components, we have

The computation for the transverse field component shows that it recovers all of $F$, as expected, since the propagation component is zero.

Another way to observe this is from the split of $F$ into electric and magnetic field components. From (8) we have

The space containing each of the $\mathbf{E}$ and $\mathbf{B}$ vectors lies in the span of the transverse plane. We also see some potential redundancy in this representation, since we have four vectors describing this two-dimensional span instead of just two.
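This transversality can also be checked numerically in the Pauli matrix representation of $Cl(3)$ (a sketch, with randomly chosen transverse components):

```python
import numpy as np

# Pauli-matrix representation of Cl(3): basis vectors as Pauli matrices.
e1 = np.array([[0, 1], [1, 0]], dtype=complex)
e2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
e3 = np.array([[1, 0], [0, -1]], dtype=complex)
I = e1 @ e2 @ e3  # spatial pseudoscalar

rng = np.random.default_rng(0)
a, b, c, d = rng.normal(size=4)

# A transverse field: E and B both in the span of {e1, e2}.
E = a * e1 + b * e2
B = c * e1 + d * e2
F = E + I @ B

# A transverse F = E + I B anticommutes with the propagation direction e3 ...
assert np.allclose(F @ e3, -e3 @ F)

# ... and equivalently anticommutes with the transverse plane bivector e1 e2.
i_t = e1 @ e2
assert np.allclose(F @ i_t, -i_t @ F)

print("transverse field anticommutation verified")
```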

## General wave packet.

If (1) were a scalar equation, it can readily be shown using Fourier transforms that the time propagation of the field, given its description at some initial time, is
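A sketch of that Fourier result (a reconstruction of the standard calculation; the name $\tilde{\psi}$ and the normalization convention are assumed): taking $\boldsymbol{\nabla} \rightarrow i \mathbf{k}$ under the transform, the first order equation integrates to a vector-valued exponential

$$\partial_t \tilde{\psi}(\mathbf{k}, t) = -i c \mathbf{k}\, \tilde{\psi}(\mathbf{k}, t)
\quad \Rightarrow \quad
\psi(\mathbf{x}, t) = \frac{1}{(2\pi)^3} \int e^{i \mathbf{k} \cdot \mathbf{x}}\, e^{-i c \mathbf{k} t}\, \tilde{\psi}(\mathbf{k}, 0)\, d^3 \mathbf{k},$$

where $e^{-i c \mathbf{k} t} = \cos(c k t) - i \hat{\mathbf{k}} \sin(c k t)$, since $(i\mathbf{k})^2 = -k^2$. That exponential of a vector necessarily carries non-scalar grades.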

In traditional complex algebra the vector exponentials would not be well formed. We do not have that problem in the GA formalism, but this does lead to a contradiction, since the resulting field cannot be scalar valued. However, using this as a motivational tool, and also using the assumed structure for the discrete frequency infinite wavetrain phasor, we can guess that a transverse-only (to the $e_3$-axis) wave packet may be described by a single direction variant of the Fourier result above. That is

Since (13) has the same form as the earlier single frequency phasor test solution, we now know that the field is required to anticommute with $e_3$. Application of Maxwell’s equation to this test solution gives us

This means that $F$ must satisfy the gradient eigenvalue equation for all $\omega$

$$\boldsymbol{\nabla} F = \frac{\omega}{c} F i.$$

Observe that this is the single frequency problem of equation (3), so for mono-directional light we can consider the infinite wave train instead of a wave packet with no loss of generality.

## Applying separation of variables.

While this may not lead to the most general solution to the radiation problem, the transverse-only propagation problem is still of interest. Let’s see where this leads. To reduce the scope of the problem by one degree of freedom, let’s split out the $e_3$ component of the gradient, writing

$$\boldsymbol{\nabla} = e_3 \partial_z + \boldsymbol{\nabla}_t.$$

Also introduce a product split for separation of variables for the $z$ dependence. That is

Again we are faced with the problem of too many choices for the grades of each of these factors. We can pick one of them, say the $z$-dependent factor, to have only scalar and pseudoscalar grades, so that the two factors commute. Then we have

With the $z$-dependent factor in an algebra isomorphic to the complex numbers, it is necessarily invertible (and commutes with its derivative). Similar arguments to the grade fixing for $F$ show that the transverse factor has only vector and bivector grades, but does it have the inverse required to do the separation of variables? Let’s blindly suppose that we can do this (and if we can’t, we can probably fudge it, since we multiply by that factor again soon after). With some rearranging we have

We want to separately equate these to a constant. In order to commute these factors we’ve only required that the $z$-dependent factor have scalar and pseudoscalar grades only, so for the constant let’s pick an arbitrary element of this subspace. That is

The solution for the $z$-dependent factor in the separation of variables is thus
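A sketch of that solution, writing the separation constant as, say, $\kappa = \alpha + I \beta$: the scalar-plus-pseudoscalar subalgebra behaves exactly like $\mathbb{C}$ (since $I^2 = -1$ and $I$ commutes with everything), so the first order equation $T' = \kappa T$ for that factor $T(z)$ integrates to

$$T(z) = T(0)\, e^{\kappa z} = T(0)\, e^{\alpha z} \left( \cos(\beta z) + I \sin(\beta z) \right).$$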

For the transverse factor, the separation of variables gives us

We’ve now reduced the problem to something like a two variable eigenvalue problem, where the differential operator to find eigenvectors for is the transverse gradient $\boldsymbol{\nabla}_t$. We unfortunately have an untidy split of the eigenvalue into left and right hand factors.

While the product $F$ was transverse only, we’ve now potentially lost that nice property for the transverse factor itself, and do not know whether it is strictly commuting or anticommuting with $e_3$. Assuming either possibility for now, we can split this multivector into transverse and propagation direction fields

With this split, a rearrangement of (20) produces

How do we find these eigen-multivectors? A couple of possibilities come to mind (perhaps not encompassing all solutions). One is for one of the two to be zero; another is to separately require both halves of (23) to equal a constant, very much like separation of variables, despite the fact that both of these functions are functions of $x$ and $y$. The easiest non-trivial path is probably letting both sides of (23) separately equal zero, so that we are left with two independent eigen-multivector problems to solve

Damn. Have to mull this over. Don’t know where to go with it.

# References

[1] J.D. Jackson. *Classical Electrodynamics*. Wiley, 2nd edition, 1975.
