Peeter Joot's (OLD) Blog.

Math, physics, perl, and programming obscurity.

Posts Tagged ‘phasor’

Plane wave solutions of Maxwell’s equation using Geometric Algebra

Posted by peeterjoot on September 3, 2012

Motivation

Study of reflection and transmission of radiation in isotropic, charge and current free, linear matter utilizes the plane wave solutions to Maxwell’s equations. These have the structure of phasor equations, with some specific constraints on the components and the exponents.

These constraints are usually derived starting with the plain old vector form of Maxwell’s equations, and it is natural to wonder how this is done directly using Geometric Algebra. [1] provides one such derivation, using the covariant form of Maxwell’s equations. Here’s a slightly more pedestrian way of doing the same.

Maxwell’s equations in media

We start with Maxwell’s equations for linear matter as found in [2]

\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{E} = 0\end{aligned} \hspace{\stretch{1}}(1.2.1a)

\begin{aligned}\boldsymbol{\nabla} \times \mathbf{E} = -\frac{\partial {\mathbf{B}}}{\partial {t}}\end{aligned} \hspace{\stretch{1}}(1.2.1b)

\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{B} = 0\end{aligned} \hspace{\stretch{1}}(1.2.1c)

\begin{aligned}\boldsymbol{\nabla} \times \mathbf{B} = \mu\epsilon \frac{\partial {\mathbf{E}}}{\partial {t}}.\end{aligned} \hspace{\stretch{1}}(1.2.1d)

We merge these using the geometric identity

\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{a} + I \boldsymbol{\nabla} \times \mathbf{a} = \boldsymbol{\nabla} \mathbf{a},\end{aligned} \hspace{\stretch{1}}(1.2.2)

where I is the 3D pseudoscalar I = \mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_3, to find

\begin{aligned}\boldsymbol{\nabla} \mathbf{E} = -I \frac{\partial {\mathbf{B}}}{\partial {t}}\end{aligned} \hspace{\stretch{1}}(1.2.3a)

\begin{aligned}\boldsymbol{\nabla} \mathbf{B} = I \mu\epsilon \frac{\partial {\mathbf{E}}}{\partial {t}}.\end{aligned} \hspace{\stretch{1}}(1.2.3b)

We want the time derivative to have the same 1/L dimensions as the gradient, so we divide 1.2.3b through by \sqrt{\mu\epsilon} I for

\begin{aligned}-I \frac{1}{{\sqrt{\mu\epsilon}}} \boldsymbol{\nabla} \mathbf{B} = \sqrt{\mu\epsilon} \frac{\partial {\mathbf{E}}}{\partial {t}}.\end{aligned} \hspace{\stretch{1}}(1.2.4)

This can now be added to 1.2.3a for

\begin{aligned}\left(\boldsymbol{\nabla} + \sqrt{\mu\epsilon} \frac{\partial {}}{\partial {t}} \right) \left( \mathbf{E} + \frac{I}{\sqrt{\mu\epsilon}} \mathbf{B} \right)= 0.\end{aligned} \hspace{\stretch{1}}(1.2.5)

This is Maxwell’s equation in linear isotropic charge and current free matter in Geometric Algebra form.
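The merge step leans on the identity \boldsymbol{\nabla} \mathbf{a} = \boldsymbol{\nabla} \cdot \mathbf{a} + I \boldsymbol{\nabla} \times \mathbf{a}, whose algebraic counterpart for vectors is \mathbf{a} \mathbf{b} = \mathbf{a} \cdot \mathbf{b} + I (\mathbf{a} \times \mathbf{b}). As a sanity check (my addition, not part of the original derivation), here is a toy Euclidean Cl(3) product that verifies this counterpart and I^2 = -1 for sample vectors:

```python
# Minimal Euclidean Cl(3) sketch: multivectors are {bitmask: coeff} dicts,
# bit i of the mask standing for a factor of e_{i+1}.
def reorder_sign(a, b):
    # sign from counting the basis-vector swaps needed to sort e_a e_b
    a >>= 1
    s = 0
    while a:
        s += bin(a & b).count("1")
        a >>= 1
    return -1 if s % 2 else 1

def gp(x, y):
    # geometric product with a Euclidean metric (each e_i squares to +1)
    out = {}
    for ba, ca in x.items():
        for bb, cb in y.items():
            blade = ba ^ bb
            out[blade] = out.get(blade, 0.0) + reorder_sign(ba, bb) * ca * cb
    return out

def vec(x, y, z):
    return {0b001: x, 0b010: y, 0b100: z}

def cross(u, w):
    return (u[1]*w[2]-u[2]*w[1], u[2]*w[0]-u[0]*w[2], u[0]*w[1]-u[1]*w[0])

I = {0b111: 1.0}  # pseudoscalar e1 e2 e3

av, bv = (1.0, 2.0, 3.0), (-2.0, 0.5, 4.0)   # arbitrary sample vectors
ab = gp(vec(*av), vec(*bv))

# a b should equal a . b (scalar) plus I (a x b) (bivector)
rhs = gp(I, vec(*cross(av, bv)))
rhs[0] = rhs.get(0, 0.0) + sum(x*y for x, y in zip(av, bv))

I_squared = gp(I, I)   # expect the scalar -1
```

Blade signs use the usual swap-counting rule; this is a sketch for checking identities, not a production GA library.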

Phasor solutions

We write the electromagnetic field as

\begin{aligned}F = \left( \mathbf{E} + \frac{I}{\sqrt{\mu\epsilon}} \mathbf{B} \right),\end{aligned} \hspace{\stretch{1}}(1.3.6)

so that for vacuum where 1/\sqrt{\mu \epsilon} = c we have the usual F = \mathbf{E} + I c \mathbf{B}. Assuming a phasor solution of

\begin{aligned}\tilde{F} = F_0 e^{i (\mathbf{k} \cdot \mathbf{x} - \omega t)}\end{aligned} \hspace{\stretch{1}}(1.3.7)

where F_0 is allowed to be complex, and the actual field is obtained by taking the real part

\begin{aligned}F = \text{Real} \tilde{F} = \text{Real}(F_0) \cos(\mathbf{k} \cdot \mathbf{x} - \omega t)-\text{Imag}(F_0) \sin(\mathbf{k} \cdot \mathbf{x} - \omega t).\end{aligned} \hspace{\stretch{1}}(1.3.8)
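This split of the real part is easy to spot check numerically. A quick sketch (my addition, with arbitrary sample values) for a complex scalar amplitude:

```python
# Check that Real(F0 e^{i phase}) = Real(F0) cos(phase) - Imag(F0) sin(phase)
import cmath
import math

F0 = 2.0 - 3.0j    # sample complex amplitude
phase = 0.7        # stands in for k.x - w t

lhs = (F0 * cmath.exp(1j * phase)).real
rhs = F0.real * math.cos(phase) - F0.imag * math.sin(phase)
```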

Note carefully that we are using a scalar imaginary i, as well as the multivector (pseudoscalar) I, despite the fact that both square to scalar minus one.

We now seek the constraints on \mathbf{k}, \omega, and F_0 that allow this to be a solution to 1.2.5

\begin{aligned}0 = \left(\boldsymbol{\nabla} + \sqrt{\mu\epsilon} \frac{\partial {}}{\partial {t}} \right) \tilde{F}.\end{aligned} \hspace{\stretch{1}}(1.3.9)

As usual in the non-geometric algebra treatment, we observe that any such solution to Maxwell’s equation is also a wave equation solution. In GA we can show this by left multiplying with the conjugate form of the operator,

\begin{aligned}\begin{aligned}0 &= \left(\boldsymbol{\nabla} + \sqrt{\mu\epsilon} \frac{\partial {}}{\partial {t}} \right) \tilde{F} \\ &= \left(\boldsymbol{\nabla} - \sqrt{\mu\epsilon} \frac{\partial {}}{\partial {t}} \right) \left(\boldsymbol{\nabla} + \sqrt{\mu\epsilon} \frac{\partial {}}{\partial {t}} \right) \tilde{F} \\ &=\left( \boldsymbol{\nabla}^2 - \mu\epsilon \frac{\partial^2}{\partial t^2} \right) \tilde{F} \\ &=\left( \boldsymbol{\nabla}^2 - \frac{1}{{v^2}} \frac{\partial^2}{\partial t^2} \right) \tilde{F},\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.3.10)

where v = 1/\sqrt{\mu\epsilon} is the speed of the wave described by this solution.

Inserting the exponential form of our assumed solution 1.3.7 we find

\begin{aligned}0 = -(\mathbf{k}^2 - \omega^2/v^2) F_0 e^{i (\mathbf{k} \cdot \mathbf{x} - \omega t)},\end{aligned} \hspace{\stretch{1}}(1.3.11)

which implies that the wave number vector \mathbf{k} and the angular frequency \omega are related by

\begin{aligned}v^2 \mathbf{k}^2 = \omega^2.\end{aligned} \hspace{\stretch{1}}(1.3.12)

Our assumed solution must also satisfy the first order system 1.3.9

\begin{aligned}\begin{aligned}0 &= \left(\boldsymbol{\nabla} + \sqrt{\mu\epsilon} \frac{\partial {}}{\partial {t}} \right) F_0e^{i (\mathbf{k} \cdot \mathbf{x} - \omega t)} \\ &=i\left(\mathbf{e}_m k_m - \frac{\omega}{v}\right) F_0e^{i (\mathbf{k} \cdot \mathbf{x} - \omega t)} \\ &=i k ( \hat{\mathbf{k}} - 1 ) F_0 e^{i (\mathbf{k} \cdot \mathbf{x} - \omega t)}.\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.3.13)

The constraints on F_0 must then be given by

\begin{aligned}0 = ( \hat{\mathbf{k}} - 1 ) F_0.\end{aligned} \hspace{\stretch{1}}(1.3.14)

With

\begin{aligned}F_0 = \mathbf{E}_0 + I v \mathbf{B}_0,\end{aligned} \hspace{\stretch{1}}(1.3.15)

we must then have all grades of the multivector equation equal to zero

\begin{aligned}0 = ( \hat{\mathbf{k}} - 1 ) \left(\mathbf{E}_0 + I v \mathbf{B}_0\right).\end{aligned} \hspace{\stretch{1}}(1.3.16)

Writing out all the geometric products, noting that I commutes with all of \hat{\mathbf{k}}, \mathbf{E}_0, and \mathbf{B}_0 and employing the identity \mathbf{a} \mathbf{b} = \mathbf{a} \cdot \mathbf{b} + \mathbf{a} \wedge \mathbf{b} we have

\begin{aligned}0 = \hat{\mathbf{k}} \cdot \mathbf{E}_0- \mathbf{E}_0+ \hat{\mathbf{k}} \wedge \mathbf{E}_0+ I v \hat{\mathbf{k}} \cdot \mathbf{B}_0+ I v \hat{\mathbf{k}} \wedge \mathbf{B}_0- I v \mathbf{B}_0.\end{aligned} \hspace{\stretch{1}}(1.3.17)

Setting the scalar, vector, bivector, and pseudoscalar grades separately to zero, this is

\begin{aligned}0 = \hat{\mathbf{k}} \cdot \mathbf{E}_0 \end{aligned} \hspace{\stretch{1}}(1.3.18a)

\begin{aligned}\mathbf{E}_0 =- \hat{\mathbf{k}} \times v \mathbf{B}_0 \end{aligned} \hspace{\stretch{1}}(1.3.18b)

\begin{aligned}v \mathbf{B}_0 = \hat{\mathbf{k}} \times \mathbf{E}_0 \end{aligned} \hspace{\stretch{1}}(1.3.18c)

\begin{aligned}0 = \hat{\mathbf{k}} \cdot \mathbf{B}_0.\end{aligned} \hspace{\stretch{1}}(1.3.18d)

This and 1.3.12 describe all the constraints on our phasor that are required for it to be a solution. Note that only one of the two cross product equations is required, because the two are not independent. This can be shown by crossing \hat{\mathbf{k}} with 1.3.18b and using the identity

\begin{aligned}\mathbf{a} \times (\mathbf{a} \times \mathbf{b}) = - \mathbf{a}^2 \mathbf{b} + \mathbf{a} (\mathbf{a} \cdot \mathbf{b}).\end{aligned} \hspace{\stretch{1}}(1.3.19)
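Both the identity and the equivalence of the two cross product constraints are easy to confirm numerically. A sketch of mine, with arbitrary sample vectors and a transverse \mathbf{E}_0:

```python
# Check a x (a x b) = a (a . b) - a^2 b, and that B0 = khat x E0 / v
# together with khat . E0 = 0 gives back E0 = -v khat x B0.
def cross(a, b):
    return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

a = (0.3, -1.2, 2.0)
b = (1.0, 0.5, -0.7)
lhs = cross(a, cross(a, b))
rhs = tuple(ai*dot(a, b) - dot(a, a)*bi for ai, bi in zip(a, b))

khat = (0.0, 0.0, 1.0)
E0 = (3.0, -1.0, 0.0)        # transverse: khat . E0 = 0
v = 2.0                      # sample wave speed
B0 = tuple(c/v for c in cross(khat, E0))
E0_back = tuple(-v*c for c in cross(khat, B0))
```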

One can find easily that 1.3.18b and 1.3.18c provide the same relationship between the \mathbf{E}_0 and \mathbf{B}_0 components of F_0. Writing out the complete expression for F_0 we have

\begin{aligned}\begin{aligned}F_0 &= \mathbf{E}_0 + I v \mathbf{B}_0 \\ &=\mathbf{E}_0 + I \hat{\mathbf{k}} \times \mathbf{E}_0 \\ &=\mathbf{E}_0 + \hat{\mathbf{k}} \wedge \mathbf{E}_0.\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.3.20)

Since \hat{\mathbf{k}} \cdot \mathbf{E}_0 = 0, this is

\begin{aligned}F_0 = (1 + \hat{\mathbf{k}}) \mathbf{E}_0.\end{aligned} \hspace{\stretch{1}}(1.3.21)

Had we been clever enough, this could have been deduced directly from 1.3.14, since we require a product that is killed by left multiplication with \hat{\mathbf{k}} - 1, and (\hat{\mathbf{k}} - 1)(\hat{\mathbf{k}} + 1) = \hat{\mathbf{k}}^2 - 1 = 0. Our complete plane wave solution to Maxwell’s equation is therefore given by

\begin{aligned}\begin{aligned}F &= \text{Real}(\tilde{F}) = \mathbf{E} + \frac{I}{\sqrt{\mu\epsilon}} \mathbf{B} \\ \tilde{F} &= (1 \pm \hat{\mathbf{k}}) \mathbf{E}_0 e^{i (\mathbf{k} \cdot \mathbf{x} \mp \omega t)} \\ 0 &= \hat{\mathbf{k}} \cdot \mathbf{E}_0 \\ \mathbf{k}^2 &= \omega^2 \mu \epsilon.\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.3.22)
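The assembled solution can also be checked against the original vector equations. The sketch below (my addition, with arbitrary sample values for \mathbf{k}, \mathbf{E}_0 and v) builds \mathbf{E} and \mathbf{B} from the constraints above and confirms Faraday's law \boldsymbol{\nabla} \times \mathbf{E} = -\partial \mathbf{B}/\partial t by central differences:

```python
# Verify a constructed plane wave against Faraday's law numerically.
import math

v = 0.5                      # wave speed 1/sqrt(mu eps), arbitrary sample
k = (0.0, 0.0, 2.0)          # sample propagation vector
E0 = (1.0, 0.4, 0.0)         # transverse amplitude, k . E0 = 0
knorm = math.sqrt(sum(c*c for c in k))
w = v * knorm                # dispersion relation
khat = [c/knorm for c in k]

def cross(a, b):
    return [a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0]]

def E(x, t):
    ph = sum(ki*xi for ki, xi in zip(k, x)) - w*t
    return [c*math.cos(ph) for c in E0]

def B(x, t):
    # B = (khat x E0 / v) cos(k.x - w t), from the grade constraints
    ph = sum(ki*xi for ki, xi in zip(k, x)) - w*t
    return [c*math.cos(ph)/v for c in cross(khat, E0)]

h = 1e-5
x0, t0 = [0.3, -0.2, 0.7], 0.4

def curlE(x, t):
    out = []
    for i in range(3):
        j, l = (i+1) % 3, (i+2) % 3
        xp = list(x); xp[j] += h
        xm = list(x); xm[j] -= h
        dj = (E(xp, t)[l] - E(xm, t)[l]) / (2*h)   # d E_l / d x_j
        xp = list(x); xp[l] += h
        xm = list(x); xm[l] -= h
        dl = (E(xp, t)[j] - E(xm, t)[j]) / (2*h)   # d E_j / d x_l
        out.append(dj - dl)
    return out

dBdt = [(bp - bm)/(2*h) for bp, bm in zip(B(x0, t0+h), B(x0, t0-h))]
residual = max(abs(c + d) for c, d in zip(curlE(x0, t0), dBdt))
```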

References

[1] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.

[2] D.J. Griffiths. Introduction to Electrodynamics. Prentice-Hall, 1981.

Posted in Math and Physics Learning.

Geometry of general Jones vector (problem 2.8)

Posted by peeterjoot on August 9, 2012

Another problem from [1].

Problem

The general case is represented by the Jones vector

\begin{aligned}\begin{bmatrix}A \\ B e^{i\Delta}\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(1.1.1)

Show that this represents elliptically polarized light in which the major axis of the ellipse makes an angle

\begin{aligned}\frac{1}{{2}} \tan^{-1} \left( \frac{2 A B \cos \Delta }{A^2 - B^2} \right),\end{aligned} \hspace{\stretch{1}}(1.1.2)

with the x axis.

Solution

Prior to attempting the problem as stated, let’s explore the algebra of a parametric representation of an ellipse, rotated at an angle \theta as in figure (1). The equation of the ellipse in the rotated coordinates is

Figure 1: Rotated ellipse

\begin{aligned}\begin{bmatrix}x' \\ y'\end{bmatrix}=\begin{bmatrix}a \cos u \\ b \sin u\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(1.2.3)

which is easily seen to have the required form

\begin{aligned}\left( \frac{x'}{a} \right)^2+\left( \frac{y'}{b} \right)^2 = 1.\end{aligned} \hspace{\stretch{1}}(1.2.4)

We’d like to express x' and y' in the “fixed” frame. Consider figure (2) where our coordinate conventions are illustrated. With

Figure 2: 2d rotation of frame

\begin{aligned}\begin{bmatrix}\hat{\mathbf{x}}' \\ \hat{\mathbf{y}}'\end{bmatrix}=\begin{bmatrix}\hat{\mathbf{x}} e^{\hat{\mathbf{x}} \hat{\mathbf{y}} \theta} \\ \hat{\mathbf{y}} e^{\hat{\mathbf{x}} \hat{\mathbf{y}} \theta}\end{bmatrix}=\begin{bmatrix}\hat{\mathbf{x}} \cos \theta + \hat{\mathbf{y}} \sin\theta \\ \hat{\mathbf{y}} \cos \theta - \hat{\mathbf{x}} \sin\theta\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(1.2.5)

and x \hat{\mathbf{x}} + y\hat{\mathbf{y}} = x' \hat{\mathbf{x}}' + y' \hat{\mathbf{y}}' we find

\begin{aligned}\begin{bmatrix}x' \\ y'\end{bmatrix}=\begin{bmatrix}\cos \theta & \sin\theta \\ -\sin\theta & \cos\theta\end{bmatrix}\begin{bmatrix}x \\ y\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(1.2.6)

so that the equation of the ellipse can be stated as

\begin{aligned}\begin{bmatrix}\cos \theta & \sin\theta \\ -\sin\theta & \cos\theta\end{bmatrix}\begin{bmatrix}x \\ y\end{bmatrix}=\begin{bmatrix}a \cos u \\ b \sin u\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(1.2.7)

or

\begin{aligned}\begin{bmatrix}x \\ y\end{bmatrix}=\begin{bmatrix}\cos \theta & -\sin\theta \\ \sin\theta & \cos\theta\end{bmatrix}\begin{bmatrix}a \cos u \\ b \sin u\end{bmatrix}=\begin{bmatrix}a \cos \theta \cos u - b \sin \theta \sin u \\ a \sin \theta \cos u + b \cos \theta \sin u\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(1.2.8)

Observing that

\begin{aligned}\cos u + \alpha \sin u = \text{Real}\left( (1 + i \alpha) e^{-i u} \right)\end{aligned} \hspace{\stretch{1}}(1.2.9)

we have, writing \text{atan2}(x, y) for the angle of the complex number x + i y, a Jones vector representation of our rotated ellipse

\begin{aligned}\begin{bmatrix}x \\ y\end{bmatrix}=\text{Real}\begin{bmatrix}( a \cos \theta - i b \sin\theta ) e^{-iu} \\ ( a \sin \theta + i b \cos\theta ) e^{-iu}\end{bmatrix}=\text{Real}\begin{bmatrix}\sqrt{ a^2 \cos^2 \theta + b^2 \sin^2 \theta } e^{i \text{atan2}(a \cos\theta, -b\sin\theta) - i u} \\ \sqrt{ a^2 \sin^2 \theta + b^2 \cos^2 \theta } e^{i \text{atan2}(a \sin\theta, b\cos\theta) - i u}\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(1.2.10)

Since we can absorb a constant phase factor into our -iu argument, we can write this as

\begin{aligned}\begin{bmatrix}x \\ y\end{bmatrix}=\text{Real}\left(\begin{bmatrix}\sqrt{ a^2 \cos^2 \theta + b^2 \sin^2 \theta } \\ \sqrt{ a^2 \sin^2 \theta + b^2 \cos^2 \theta } e^{i \text{atan2}(a \sin\theta, b\cos\theta) -i \text{atan2}(a \cos\theta, -b\sin\theta)} \end{bmatrix} e^{-i u'}\right).\end{aligned} \hspace{\stretch{1}}(1.2.11)

This has the required form once we make the identifications

\begin{aligned}A = \sqrt{ a^2 \cos^2 \theta + b^2 \sin^2 \theta }\end{aligned} \hspace{\stretch{1}}(1.2.12)

\begin{aligned}B = \sqrt{ a^2 \sin^2 \theta + b^2 \cos^2 \theta } \end{aligned} \hspace{\stretch{1}}(1.2.13)

\begin{aligned}\Delta =\text{atan2}(a \sin\theta, b\cos\theta) - \text{atan2}(a \cos\theta, -b\sin\theta).\end{aligned} \hspace{\stretch{1}}(1.2.14)
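A numeric round trip (my addition) confirms these identifications are consistent with the angle formula of the problem statement: starting from sample ellipse parameters (a, b, \theta), building A, B, and \Delta as above, the formula \frac{1}{2} \tan^{-1}(2AB\cos\Delta/(A^2 - B^2)) recovers \theta. Note that Python's math.atan2 takes (y, x), the reverse of the text's \text{atan2}(x, y) convention.

```python
# Round trip: (a, b, theta) -> (A, B, Delta) -> recovered tilt angle.
import math

a, b, theta = 3.0, 1.0, 0.35   # sample ellipse axes and tilt, 0 < theta < pi/4

A = math.sqrt(a**2*math.cos(theta)**2 + b**2*math.sin(theta)**2)
B = math.sqrt(a**2*math.sin(theta)**2 + b**2*math.cos(theta)**2)
# the text's atan2(x, y) is the angle of x + i y, i.e. Python's atan2(y, x)
Delta = (math.atan2(b*math.cos(theta), a*math.sin(theta))
         - math.atan2(-b*math.sin(theta), a*math.cos(theta)))

recovered = 0.5*math.atan2(2*A*B*math.cos(Delta), A**2 - B**2)
```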

What isn’t obvious is that we can do this for any A, B, and \Delta. Portions of this problem I tried in Mathematica, starting from the elliptic equation derived in section 8.1.3 of [2]. I’d used Mathematica since, on paper, I found the rotation angle that eliminated the cross terms to always be 45 degrees, but this turned out to be because I’d first used a change of variables that scaled the equation. Here’s the whole procedure without any such scaling, arriving at the desired result for this problem. Our starting point is the Jones specified field, where as above I’m using -iu = i (k z - \omega t)

\begin{aligned}\mathbf{E} = \text{Real}\left( \begin{bmatrix}A \\ B e^{i \Delta}\end{bmatrix}e^{-i u}\right)=\begin{bmatrix}A \cos u \\ B \cos ( \Delta - u )\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(1.2.15)

We need our cosine angle addition formula

\begin{aligned}\cos( a + b ) = \text{Real} \left( (\cos a + i \sin a)(\cos b + i \sin b)\right) =\cos a \cos b - \sin a \sin b.\end{aligned} \hspace{\stretch{1}}(1.2.16)

Using this and writing \mathbf{E} = (x, y) we have

\begin{aligned}x = A \cos u\end{aligned} \hspace{\stretch{1}}(1.2.17)

\begin{aligned}y = B ( \cos \Delta \cos u + \sin \Delta \sin u ).\end{aligned} \hspace{\stretch{1}}(1.2.18)

Subtracting x \cos \Delta/A from y/B we have

\begin{aligned}\frac{y}{B} - \frac{x}{A} \cos \Delta = \sin \Delta \sin u.\end{aligned} \hspace{\stretch{1}}(1.2.19)

Squaring this and using \sin^2 u = 1 - \cos^2 u, and 1.2.17 we have

\begin{aligned}\left( \frac{y}{B} - \frac{x}{A} \cos \Delta \right)^2 = \sin^2 \Delta \left( 1 - \frac{x^2}{A^2} \right),\end{aligned} \hspace{\stretch{1}}(1.2.20)

which expands and simplifies to

\begin{aligned}\left( \frac{x}{A} \right)^2 +\left( \frac{y}{B} \right)^2 - 2 \left( \frac{x}{A} \right)\left( \frac{y}{B} \right)\cos \Delta = \sin^2 \Delta,\end{aligned} \hspace{\stretch{1}}(1.2.21)

which is an equation of a rotated ellipse, as desired. Let’s figure out the angle of rotation required to kill the cross terms. Writing a = 1/A, b = 1/B, and rotating our primed coordinate frame through an angle \theta

\begin{aligned}\begin{bmatrix}x \\ y\end{bmatrix}=\begin{bmatrix}\cos \theta & -\sin\theta \\ \sin\theta & \cos\theta\end{bmatrix}\begin{bmatrix}x' \\ y'\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(1.2.22)

we have

\begin{aligned}\begin{aligned}\sin^2 \Delta &=a^2 (x' \cos \theta - y'\sin\theta)^2+b^2 ( x' \sin\theta + y' \cos\theta)^2 \\ &- 2 a b (x' \cos \theta - y'\sin\theta)( x'\sin\theta + y'\cos\theta)\cos \Delta \\ &=(x')^2 ( a^2 \cos^2 \theta + b^2 \sin^2 \theta - 2 a b \cos \theta \sin \theta \cos \Delta ) \\ &+(y')^2 ( a^2 \sin^2 \theta + b^2 \cos^2 \theta + 2 a b \cos \theta \sin \theta \cos \Delta ) \\ &+ 2 x' y' ( (b^2 -a^2) \cos \theta \sin\theta + a b (\sin^2 \theta - \cos^2 \theta) \cos \Delta ).\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.2.23)

To kill off the cross term we require

\begin{aligned}\begin{aligned}0 &= (b^2 -a^2) \cos \theta \sin\theta + a b (\sin^2 \theta - \cos^2 \theta) \cos \Delta \\ &= \frac{1}{{2}} (b^2 -a^2) \sin (2 \theta) - a b \cos (2 \theta) \cos \Delta,\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.2.24)

or

\begin{aligned}\tan (2 \theta) = \frac{2 a b \cos \Delta}{b^2 - a^2} = \frac{2 A B \cos \Delta}{A^2 - B^2}.\end{aligned} \hspace{\stretch{1}}(1.2.25)
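As a quick numeric check (my addition, arbitrary sample A, B, \Delta), the angle produced by this formula really does kill the x'y' cross-term coefficient:

```python
# Verify the cross-term coefficient vanishes at the tan(2 theta) angle.
import math

A, B, Delta = 2.0, 1.0, 0.6
a, b = 1.0/A, 1.0/B
theta = 0.5*math.atan2(2*a*b*math.cos(Delta), b**2 - a**2)

cross_coeff = ((b**2 - a**2)*math.cos(theta)*math.sin(theta)
               + a*b*(math.sin(theta)**2 - math.cos(theta)**2)*math.cos(Delta))
```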

This yields 1.1.2 as desired. We also end up with expressions for our major and minor axis lengths, which are respectively for \sin \Delta \ne 0

\begin{aligned}\sin\Delta/ \sqrt{ b^2 + (a^2 - b^2) \cos^2 \theta - a b \sin (2 \theta) \cos \Delta }\end{aligned} \hspace{\stretch{1}}(1.2.26)

\begin{aligned}\sin\Delta/\sqrt{ b^2 + (a^2 - b^2)\sin^2 \theta + a b \sin (2 \theta) \cos \Delta },\end{aligned} \hspace{\stretch{1}}(1.2.27)

which completes the task of determining the geometry of the elliptic parameterization we see results from the general Jones vector description.

References

[1] G.R. Fowles. Introduction to modern optics. Dover Pubns, 1989.

[2] E. Hecht. Optics. Addison-Wesley, 3rd edition, 1998.

Posted in Math and Physics Learning.

Complex form of Poynting relationship

Posted by peeterjoot on August 2, 2012

This is a problem from [1], something that I’d tried back when reading [2] but in a way that involved Geometric Algebra and the covariant representation of the energy momentum tensor. Let’s try this with plain old complex vector algebra instead.

Question: Average Poynting flux for complex 2D fields (problem 2.4)

Given a complex field phasor representation of the form

\begin{aligned}\tilde{\mathbf{E}} = \mathbf{E}_0 e^{i (\mathbf{k} \cdot \mathbf{x} - \omega t)}\end{aligned} \hspace{\stretch{1}}(1.0.1)

\begin{aligned}\tilde{\mathbf{H}} = \mathbf{H}_0 e^{i (\mathbf{k} \cdot \mathbf{x} - \omega t)}.\end{aligned} \hspace{\stretch{1}}(1.0.2)

Here we allow the components of \mathbf{E}_0 and \mathbf{H}_0 to be complex. As usual our fields are defined as the real parts of the phasors

\begin{aligned}\mathbf{E} = \text{Real}( \tilde{\mathbf{E}} )\end{aligned} \hspace{\stretch{1}}(1.0.3)

\begin{aligned}\mathbf{H} = \text{Real}( \tilde{\mathbf{H}} ).\end{aligned} \hspace{\stretch{1}}(1.0.4)

Show that the average Poynting vector has the value

\begin{aligned}\left\langle{{ \mathbf{S} }}\right\rangle = \left\langle{{ \mathbf{E} \times \mathbf{H} }}\right\rangle = \frac{1}{{2}} \text{Real}( \mathbf{E}_0 \times \mathbf{H}_0^{*} ).\end{aligned} \hspace{\stretch{1}}(1.0.5)

Answer

While the text works with two dimensional quantities in the x,y plane, I found this problem easier when tackled in three dimensions. Suppose we write the complex phasor components as

\begin{aligned}\mathbf{E}_0 = \sum_k (\mathbf{E}_{kr} + i \mathbf{E}_{ki}) \mathbf{e}_k = \sum_k {\left\lvert{\mathbf{E}_k}\right\rvert} e^{i \phi_k} \mathbf{e}_k\end{aligned} \hspace{\stretch{1}}(1.0.6)

\begin{aligned}\mathbf{H}_0 = \sum_k (\mathbf{H}_{kr} + i \mathbf{H}_{ki}) \mathbf{e}_k = \sum_k {\left\lvert{\mathbf{H}_k}\right\rvert} e^{i \psi_k} \mathbf{e}_k,\end{aligned} \hspace{\stretch{1}}(1.0.7)

and also write \phi_k' = \phi_k + \mathbf{k} \cdot \mathbf{x}, and \psi_k' = \psi_k + \mathbf{k} \cdot \mathbf{x}, then our (real) fields are

\begin{aligned}\mathbf{E} = \sum_k {\left\lvert{\mathbf{E}_k}\right\rvert} \cos(\phi_k' - \omega t) \mathbf{e}_k\end{aligned} \hspace{\stretch{1}}(1.0.8)

\begin{aligned}\mathbf{H} = \sum_k {\left\lvert{\mathbf{H}_k}\right\rvert} \cos(\psi_k' - \omega t) \mathbf{e}_k,\end{aligned} \hspace{\stretch{1}}(1.0.9)

and our Poynting vector before averaging (in these units) is

\begin{aligned}\mathbf{E} \times \mathbf{H} = \sum_{klm} {\left\lvert{\mathbf{E}_k}\right\rvert} {\left\lvert{\mathbf{H}_l}\right\rvert} \cos(\phi_k' - \omega t) \cos(\psi_l' - \omega t) \epsilon_{klm} \mathbf{e}_m.\end{aligned} \hspace{\stretch{1}}(1.0.10)

We are tasked with computing the average of cosines

\begin{aligned}\left\langle{{ \cos(a - \omega t) \cos(b - \omega t) }}\right\rangle&=\frac{1}{{T}} \int_0^T \cos(a - \omega t) \cos(b - \omega t) dt \\ &=\frac{1}{{\omega T}} \int_0^T \cos(a - \omega t) \cos(b - \omega t) \omega dt \\ &=\frac{1}{{2 \pi}} \int_0^{2 \pi}\cos(a - u) \cos(b - u) du \\ &=\frac{1}{{4 \pi}} \int_0^{2 \pi}\left( \cos(a + b - 2 u) + \cos(a - b) \right) du \\ &=\frac{1}{{2}} \cos(a - b).\end{aligned} \hspace{\stretch{1}}(1.0.11)
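This average is easy to confirm by direct numeric quadrature over one period (a sketch of mine, with arbitrary sample phases):

```python
# Midpoint-rule check of <cos(a - w t) cos(b - w t)> = (1/2) cos(a - b).
import math

a, b, w = 1.1, 0.3, 2.0
T = 2*math.pi/w
N = 100000
avg = sum(math.cos(a - w*(i + 0.5)*T/N) * math.cos(b - w*(i + 0.5)*T/N)
          for i in range(N)) / N
expected = 0.5*math.cos(a - b)
```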

So, our average Poynting vector is

\begin{aligned}\left\langle{{\mathbf{E} \times \mathbf{H}}}\right\rangle = \frac{1}{{2}} \sum_{klm} {\left\lvert{\mathbf{E}_k}\right\rvert} {\left\lvert{\mathbf{H}_l}\right\rvert} \cos(\phi_k - \psi_l) \epsilon_{klm} \mathbf{e}_m.\end{aligned} \hspace{\stretch{1}}(1.0.12)

We have only to compare this to the desired expression

\begin{aligned}\frac{1}{{2}} \text{Real}( \mathbf{E}_0 \times \mathbf{H}_0^{*} )= \frac{1}{{2}} \sum_{klm} \text{Real}\left({\left\lvert{\mathbf{E}_k}\right\rvert} e^{i\phi_k}{\left\lvert{\mathbf{H}_l}\right\rvert} e^{-i\psi_l}\right)\epsilon_{klm} \mathbf{e}_m = \frac{1}{{2}} \sum_{klm} {\left\lvert{\mathbf{E}_k}\right\rvert} {\left\lvert{\mathbf{H}_l}\right\rvert} \cos( \phi_k - \psi_l )\epsilon_{klm} \mathbf{e}_m.\end{aligned} \hspace{\stretch{1}}(1.0.13)

This proves the desired result.
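An end-to-end numeric check (my addition, with arbitrary complex sample amplitudes) that averages \mathbf{E} \times \mathbf{H} directly and compares against \frac{1}{2}\text{Real}(\mathbf{E}_0 \times \mathbf{H}_0^{*}):

```python
# Time-average E x H over a period and compare with (1/2) Real(E0 x H0*).
import math

E0 = [1.0 + 2.0j, -0.5 + 0.3j, 0.7 - 1.1j]   # sample complex amplitudes
H0 = [0.2 - 0.9j, 1.3 + 0.4j, -0.6 + 0.5j]
w = 3.0
T = 2*math.pi/w

def cross(a, b):
    return [a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0]]

def real_field(F0, t):
    # e^{-i w t}; the k.x phase can be folded into F0 at a fixed point
    ph = complex(math.cos(-w*t), math.sin(-w*t))
    return [(c*ph).real for c in F0]

N = 10000
avg = [0.0, 0.0, 0.0]
for i in range(N):
    t = (i + 0.5)*T/N
    s = cross(real_field(E0, t), real_field(H0, t))
    avg = [p + q/N for p, q in zip(avg, s)]

expected = [c.real/2 for c in cross(E0, [h.conjugate() for h in H0])]
```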

References

[1] G.R. Fowles. Introduction to modern optics. Dover Pubns, 1989.

[2] J.D. Jackson. Classical Electrodynamics. John Wiley and Sons, 2nd edition, 1975.

Posted in Math and Physics Learning.

Continuum mechanics elasticity review.

Posted by peeterjoot on April 23, 2012

Motivation.

Review of key ideas and equations from the theory of elasticity portion of the class.

Strain Tensor

Identifying a point in a solid with coordinates x_i, and the coordinates of that same point after displacement with x_i', we formed the difference as a measure of the displacement

\begin{aligned}u_i = x_i' - x_i.\end{aligned} \hspace{\stretch{1}}(2.1)

With du_i = {\partial {u_i}}/{\partial {x_j}} dx_j, we computed the difference in length (squared) for an element of the displaced solid and found

\begin{aligned}dx_k' dx_k' - dx_k dx_k = \left( \frac{\partial {u_j}}{\partial {x_i}} + \frac{\partial {u_i}}{\partial {x_j}} + \frac{\partial {u_k}}{\partial {x_i}} \frac{\partial {u_k}}{\partial {x_j}} \right) dx_i dx_j,\end{aligned} \hspace{\stretch{1}}(2.2)

or defining the \textit{strain tensor} e_{ij}, we have

\begin{aligned}(d\mathbf{x}')^2 - (d\mathbf{x})^2= 2 e_{ij} dx_i dx_j\end{aligned} \hspace{\stretch{1}}(2.3a)

\begin{aligned}e_{ij}=\frac{1}{{2}}\left( \frac{\partial {u_j}}{\partial {x_i}} + \frac{\partial {u_i}}{\partial {x_j}} + \frac{\partial {u_k}}{\partial {x_i}} \frac{\partial {u_k}}{\partial {x_j}} \right).\end{aligned} \hspace{\stretch{1}}(2.3b)

In this course we use only the linear terms and write

\begin{aligned}e_{ij}=\frac{1}{{2}}\left( \frac{\partial {u_j}}{\partial {x_i}} + \frac{\partial {u_i}}{\partial {x_j}} \right).\end{aligned} \hspace{\stretch{1}}(2.4)

Unresolved: Relating displacement and position by strain

In [1] it is pointed out that this strain tensor simply relates the displacement vector coordinates u_i to the coordinates at the point at which it is measured

\begin{aligned}u_i = e_{ij} x_j.\end{aligned} \hspace{\stretch{1}}(2.5)

When we get to fluid dynamics we perform a linear expansion of du_i and find something similar

\begin{aligned}dx_i' - dx_i = du_i = \frac{\partial {u_i}}{\partial {x_k}} dx_k = e_{ik} dx_k + \omega_{ik} dx_k\end{aligned} \hspace{\stretch{1}}(2.6)

where

\begin{aligned}\omega_{ij} = \frac{1}{{2}} \left( \frac{\partial {u_i}}{\partial {x_j}} -\frac{\partial {u_j}}{\partial {x_i}} \right).\end{aligned} \hspace{\stretch{1}}(2.7)

Except for the antisymmetric term, note the structural similarity of 2.5 and 2.6. Why is it that we neglect the vorticity tensor in statics?
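The decomposition of the displacement gradient into a symmetric strain part plus an antisymmetric vorticity part \omega_{ij} = \frac{1}{2}(\partial u_i/\partial x_j - \partial u_j/\partial x_i) can be spot checked numerically. A sketch of mine, using central differences on an arbitrary smooth displacement field, which also confirms that the antisymmetric part carries the curl of \mathbf{u}:

```python
# Split the numerically computed displacement gradient into strain + vorticity.
import math

def u(x):
    # arbitrary smooth displacement field for the check
    return [0.1*x[0]*x[1], 0.05*math.sin(x[2]), -0.2*x[0] + 0.03*x[1]**2]

h = 1e-6
x0 = [0.4, -0.7, 1.2]

# g[i][j] = du_i/dx_j by central differences
g = [[0.0]*3 for _ in range(3)]
for j in range(3):
    xp = list(x0); xp[j] += h
    xm = list(x0); xm[j] -= h
    up, um = u(xp), u(xm)
    for i in range(3):
        g[i][j] = (up[i] - um[i]) / (2*h)

e = [[0.5*(g[i][j] + g[j][i]) for j in range(3)] for i in range(3)]   # strain
w = [[0.5*(g[i][j] - g[j][i]) for j in range(3)] for i in range(3)]   # vorticity
curl = [g[2][1] - g[1][2], g[0][2] - g[2][0], g[1][0] - g[0][1]]
```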

Diagonal strain representation.

In a basis for which the strain tensor is diagonal, it was pointed out that we can write our difference in squared displacement as (for k = 1, 2, 3, no summation convention)

\begin{aligned}(dx_k')^2 - (dx_k)^2 = 2 e_{kk} dx_k dx_k\end{aligned} \hspace{\stretch{1}}(2.8)

from which we can rearrange, take roots, and apply a first order Taylor expansion to find (again no summation convention)

\begin{aligned}dx_k' \approx (1 + e_{kk}) dx_k.\end{aligned} \hspace{\stretch{1}}(2.9)

An approximation of the displaced volume was then found in terms of the strain tensor trace (summation convention back again)

\begin{aligned}dV' \approx (1 + e_{kk}) dV,\end{aligned} \hspace{\stretch{1}}(2.10)

allowing us to identify this trace as a relative difference in displaced volume

\begin{aligned}e_{kk} \approx \frac{dV' - dV}{dV}.\end{aligned} \hspace{\stretch{1}}(2.11)
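This trace identification follows from dV' = \det(1 + \partial u_i/\partial x_j)\, dV \approx (1 + e_{kk}) dV to first order. A quick numeric check (my addition, with an arbitrary small sample gradient):

```python
# Compare det(I + grad u) - 1 against the trace for a small deformation.
def det3(m):
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
            - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
            + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

eps = 1e-4   # small deformation scale
grad_u = [[2*eps, 0.5*eps, -eps],
          [0.3*eps, -eps, 0.7*eps],
          [-0.2*eps, eps, 3*eps]]

F = [[(1.0 if i == j else 0.0) + grad_u[i][j] for j in range(3)]
     for i in range(3)]
rel_volume_change = det3(F) - 1.0        # (dV' - dV)/dV
trace = sum(grad_u[i][i] for i in range(3))
```

The difference between the two is second order in eps, as the linearized theory expects.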

Strain in cylindrical coordinates.

Useful in many practice problems are the cylindrical coordinate representation of the strain tensor

\begin{aligned}e_{rr} &= \frac{\partial {u_r}}{\partial {r}}  \\ e_{\phi\phi} &= \frac{1}{{r}} \frac{\partial {u_\phi}}{\partial {\phi}} +\frac{1}{{r}} u_r  \\ e_{zz} &= \frac{\partial {u_z}}{\partial {z}}  \\ 2 e_{zr} &= \frac{\partial {u_r}}{\partial {z}} + \frac{\partial {u_z}}{\partial {r}} \\ 2 e_{r\phi} &= \frac{\partial {u_\phi}}{\partial {r}} - \frac{1}{{r}} u_\phi + \frac{1}{{r}} \frac{\partial {u_r}}{\partial {\phi}} \\ 2 e_{\phi z} &= \frac{\partial {u_\phi}}{\partial {z}} +\frac{1}{{r}} \frac{\partial {u_z}}{\partial {\phi}}.\end{aligned} \hspace{\stretch{1}}(2.12)

This can be found in [2]. It was not derived there or in class, but is not too hard, even using the second order methods we used for the Cartesian form of the tensor.

An easier way to do this derivation (and understand what the coordinates represent) follows from the relation found in section 6 of [3]

\begin{aligned}2 \mathbf{e}_i e_{ij} n_j = 2 (\hat{\mathbf{n}} \cdot \boldsymbol{\nabla}) \mathbf{u} + \hat{\mathbf{n}} \times (\boldsymbol{\nabla} \times \mathbf{u}),\end{aligned} \hspace{\stretch{1}}(2.18)

where \hat{\mathbf{n}} is the normal to the surface at which we are measuring a force applied to the solid (our Cauchy tetrahedron).

The cylindrical tensor coordinates of 2.12 follow nicely from 2.18, taking \hat{\mathbf{n}} = \hat{\mathbf{r}}, \hat{\boldsymbol{\phi}}, \hat{\mathbf{z}} in turn.

Compatibility condition.

For a 2D strain tensor we found an interrelationship between the components of the strain tensor

\begin{aligned}2 \frac{\partial^2 e_{12}}{\partial x_1 \partial x_2} =\frac{\partial^2 {{e_{22}}}}{\partial {{x_1}}^2} +\frac{\partial^2 {{e_{11}}}}{\partial {{x_2}}^2},\end{aligned} \hspace{\stretch{1}}(2.19)

and called this the compatibility condition. It was claimed, but not demonstrated, that this is what is required to ensure that a deformation maintains a coherent solid geometry.

I wasn’t able to find any references to this compatibility condition in any of the texts I have, but found [4], [5], and [6]. It’s not terribly surprising to see Christoffel symbol and differential forms references on those pages, since one can imagine that we’d wish to look at the mappings of all the points in the object as it undergoes the transformation from the original to the deformed state.

Even with just three points in a plane, say \mathbf{a}, \mathbf{b}, \mathbf{c}, the general deformation of an object doesn’t seem like the easiest thing to describe. We can imagine that these have trajectories in the deformation process \mathbf{a} = \mathbf{a}(\alpha), \mathbf{b} = \mathbf{b}(\beta), \mathbf{c} = \mathbf{c}(\gamma), with \mathbf{a}', \mathbf{b}', \mathbf{c}' at the end points of the trajectories. We’d want to look at displacement vectors \mathbf{u}_a, \mathbf{u}_b, \mathbf{u}_c along each of these trajectories, and then see how they must be related. Doing that carefully should result in this compatibility condition.

Stress tensor.

We sought and found a representation of the force per unit volume acting on a body by expressing the components of that force as a set of divergence relations

\begin{aligned}f_i = \partial_k \sigma_{i k},\end{aligned} \hspace{\stretch{1}}(3.20)

and call the associated tensor \sigma_{ij} the \textit{stress}.

Unlike the strain, we don’t start with any expectation that this tensor is symmetric. We identify the diagonal components (no sum) \sigma_{i i} as quantifying the amount of compressive or extensive force per unit area, whereas the cross terms of the stress tensor introduce shearing deformations in the solid.

With force balance arguments (the Cauchy tetrahedron) we found that the force per unit area on the solid, for a surface with unit normal pointing into the solid, was

\begin{aligned}\mathbf{t} = \mathbf{e}_i t_i = \mathbf{e}_i \sigma_{ij} n_j.\end{aligned} \hspace{\stretch{1}}(3.21)

Constitutive relation.

In the scope of this course we considered only Newtonian materials, those for which the stress and strain tensors are linearly related

\begin{aligned}\sigma_{ij} = c_{ijkl} e_{kl},\end{aligned} \hspace{\stretch{1}}(3.22)

and further restricted our attention to isotropic materials, which can be shown to have the form

\begin{aligned}\sigma_{ij} = \lambda e_{kk} \delta_{ij} + 2 \mu e_{ij},\end{aligned} \hspace{\stretch{1}}(3.23)

where \lambda and \mu are the Lamé parameters, and \mu is called the shear modulus (it plays the role of viscosity in the context of fluids).

By computing the trace of the stress \sigma_{ii} we can invert this to find

\begin{aligned}2 \mu e_{ij} = \sigma_{ij} - \frac{\lambda}{3 \lambda + 2 \mu} \sigma_{kk} \delta_{ij}.\end{aligned} \hspace{\stretch{1}}(3.24)
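This inversion round-trips cleanly, which is easy to confirm numerically. A sketch of mine with arbitrary sample Lamé parameters and a symmetric sample strain:

```python
# Strain -> stress via sigma_ij = lam e_kk delta_ij + 2 mu e_ij,
# then stress -> strain via the inverted relation; expect the original back.
lam, mu = 2.0, 1.5                       # sample Lame parameters

e = [[0.01, 0.002, -0.003],
     [0.002, -0.004, 0.001],
     [-0.003, 0.001, 0.005]]             # symmetric sample strain

tr_e = sum(e[i][i] for i in range(3))
sigma = [[lam*tr_e*(1 if i == j else 0) + 2*mu*e[i][j] for j in range(3)]
         for i in range(3)]
tr_s = sum(sigma[i][i] for i in range(3))
e_back = [[(sigma[i][j] - lam*tr_s*(1 if i == j else 0)/(3*lam + 2*mu))/(2*mu)
           for j in range(3)] for i in range(3)]
```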

Uniform hydrostatic compression.

With only normal components of the stress (no shear), and the stress having the same value in all directions, we find

\begin{aligned}\sigma_{ij} = ( 3 \lambda + 2 \mu ) e_{ij},\end{aligned} \hspace{\stretch{1}}(3.25)

and, writing this common stress value as \sigma_{ij} = -p \delta_{ij}, identify the pressure in terms of the strain trace

\begin{aligned}-p \delta_{ij} = \left( \lambda + \frac{2 \mu}{3} \right) e_{kk} \delta_{ij}.\end{aligned} \hspace{\stretch{1}}(3.26)

With e_{ii} = (dV' - dV)/dV = \Delta V/V, we formed the Bulk modulus K with the value

\begin{aligned}K = \left( \lambda + \frac{2 \mu}{3} \right) = -\frac{p V}{\Delta V}.\end{aligned} \hspace{\stretch{1}}(3.27)

Uniaxial stress. Young’s modulus. Poisson’s ratio.

For the special case with only one non-zero stress component (we used \sigma_{11}) we were able to compute Young’s modulus E, the ratio between stress and strain in that direction

\begin{aligned}E = \frac{\sigma_{11}}{e_{11}} = \frac{\mu(3 \lambda + 2 \mu)}{\lambda + \mu }  = \frac{3 K \mu}{K + \mu/3}.\end{aligned} \hspace{\stretch{1}}(3.28)

Just because only one component of the stress is non-zero does not mean that we have no deformation in any other directions. Introducing Poisson’s ratio \nu as the (negative) ratio of the transverse strain to the strain in the direction of the force, we subsequently found

\begin{aligned}\nu = -\frac{e_{22}}{e_{11}} = -\frac{e_{33}}{e_{11}} = \frac{\lambda}{2(\lambda + \mu)}.\end{aligned} \hspace{\stretch{1}}(3.29)
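Both formulas can be checked by setting up a uniaxial stress state numerically. A sketch of mine, using the inverted constitutive relation with only \sigma_{11} non-zero and arbitrary sample Lamé parameters:

```python
# With only sigma_11 nonzero, recover E = mu (3 lam + 2 mu)/(lam + mu)
# and nu = lam / (2 (lam + mu)) from the inverted constitutive relation.
lam, mu = 2.0, 1.5
s11 = 1.0
tr_s = s11   # only sigma_11 is nonzero

def strain(i, j, s_ij):
    return (s_ij - lam*tr_s*(1 if i == j else 0)/(3*lam + 2*mu)) / (2*mu)

e11 = strain(0, 0, s11)
e22 = strain(1, 1, 0.0)

E_measured = s11 / e11
nu_measured = -e22 / e11
E_formula = mu*(3*lam + 2*mu)/(lam + mu)
nu_formula = lam/(2*(lam + mu))
```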

We were also able to express the Lame parameters in terms of Young’s modulus and Poisson’s ratio, and the strains directly in terms of the stresses

\begin{aligned}\mu = \frac{E}{2(1 + \nu)}\end{aligned} \hspace{\stretch{1}}(3.30)

\begin{aligned}\lambda = \frac{E \nu}{(1 - 2 \nu)(1 + \nu)}\end{aligned} \hspace{\stretch{1}}(3.31)

\begin{aligned}e_{11} &= \frac{1}{{E}}\left( \sigma_{11} - \nu(\sigma_{22} + \sigma_{33}) \right) \\ e_{22} &= \frac{1}{{E}}\left( \sigma_{22} - \nu(\sigma_{11} + \sigma_{33}) \right) \\ e_{33} &= \frac{1}{{E}}\left( \sigma_{33} - \nu(\sigma_{11} + \sigma_{22}) \right)\end{aligned} \hspace{\stretch{1}}(3.32)

Displacement propagation

It was argued that the time evolution of one of the components of the vector displacement is given by

\begin{aligned}\rho \frac{\partial^2 {{u_i}}}{\partial {{t}}^2} = \frac{\partial {\sigma_{ij}}}{\partial {x_j}} + f_i,\end{aligned} \hspace{\stretch{1}}(4.35)

where the divergence term {\partial {\sigma_{ij}}}/{\partial {x_j}} is the internal force per unit volume on the object and f_i is the external force. Employing the constitutive relation we showed that this can be expanded as

\begin{aligned}\rho \frac{\partial^2 {{u_i}}}{\partial {{t}}^2} = (\lambda + \mu) \frac{\partial^2 u_k}{\partial x_i \partial x_k}+ \mu\frac{\partial^2 u_i}{\partial x_j^2},\end{aligned} \hspace{\stretch{1}}(4.36)

or in vector form

\begin{aligned}\rho \frac{\partial^2 {\mathbf{u}}}{\partial {{t}}^2} = (\lambda + \mu) \boldsymbol{\nabla} (\boldsymbol{\nabla} \cdot \mathbf{u}) + \mu \boldsymbol{\nabla}^2 \mathbf{u}.\end{aligned} \hspace{\stretch{1}}(4.37)

P-waves

Operating on 4.37 with the divergence operator, and writing \Theta = \boldsymbol{\nabla} \cdot \mathbf{u}, a quantity that was our relative change in volume in the diagonal strain basis, we were able to find this divergence obeys a wave equation

\begin{aligned}\frac{\partial^2 {{\Theta}}}{\partial {{t}}^2} = \frac{\lambda + 2 \mu}{\rho} \boldsymbol{\nabla}^2 \Theta.\end{aligned} \hspace{\stretch{1}}(4.38)

We called these P-waves.
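A plane wave satisfies 4.38 provided the dispersion relation \omega^2 = C_L^2 \mathbf{k}^2 holds. Here’s a short symbolic check of that statement (a sketch of my own using sympy; the symbol names are not from the notes):

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t', real=True)
lam, mu, rho = sp.symbols('lamda mu rho', positive=True)
k1, k2, k3 = sp.symbols('k1 k2 k3', real=True)

cL2 = (lam + 2 * mu) / rho                       # squared P-wave speed
omega = sp.sqrt(cL2 * (k1**2 + k2**2 + k3**2))   # dispersion relation

# Plane wave ansatz for the dilatation Theta.
Theta = sp.exp(sp.I * (k1 * x + k2 * y + k3 * z - omega * t))

lhs = sp.diff(Theta, t, 2)
rhs = cL2 * (sp.diff(Theta, x, 2) + sp.diff(Theta, y, 2) + sp.diff(Theta, z, 2))
assert sp.simplify(lhs - rhs) == 0
```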

S-waves

Similarly, operating on 4.37 with the curl operator, and writing \boldsymbol{\omega} = \boldsymbol{\nabla} \times \mathbf{u}, we were able to find this curl also obeys a wave equation

\begin{aligned}\rho \frac{\partial^2 {{\boldsymbol{\omega}}}}{\partial {{t}}^2} = \mu \boldsymbol{\nabla}^2 \boldsymbol{\omega}.\end{aligned} \hspace{\stretch{1}}(4.39)

These we called S-waves. We also noted that the (longitudinal) compression waves (P-waves) with speed C_L = \sqrt{(\lambda + 2 \mu)/\rho}, travel faster than the (transverse) vorticity (S) waves with speed C_T = \sqrt{\mu/\rho} since \lambda > 0 and \mu > 0, and

\begin{aligned}\frac{C_L}{C_T} = \sqrt{\frac{ \lambda + 2 \mu}{\mu}} = \sqrt{ \frac{\lambda}{\mu} + 2}.\end{aligned} \hspace{\stretch{1}}(4.40)
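A numeric illustration of 4.40 (my own, with arbitrarily chosen material constants) showing the P-wave speed always exceeds the S-wave speed:

```python
import math

lam, mu, rho = 2.3, 1.7, 1.0            # hypothetical constants, lam, mu > 0

cL = math.sqrt((lam + 2 * mu) / rho)    # longitudinal (P) wave speed
cT = math.sqrt(mu / rho)                # transverse (S) wave speed

assert cL > cT                          # P-waves always outrun S-waves
assert abs(cL / cT - math.sqrt(lam / mu + 2)) < 1e-12   # ratio (4.40)
assert cL / cT > math.sqrt(2)           # lower bound when lam > 0
```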

Scalar and vector potential representation.

Assuming a vector displacement representation with gradient and curl components

\begin{aligned}\mathbf{u} = \boldsymbol{\nabla} \phi + \boldsymbol{\nabla} \times \mathbf{H},\end{aligned} \hspace{\stretch{1}}(4.41)

we found that the displacement time evolution equation splits nicely into curl free and divergence free terms

\begin{aligned}\boldsymbol{\nabla}\left(\rho \frac{\partial^2 {{\phi}}}{\partial {{t}}^2} - (\lambda + 2\mu) \boldsymbol{\nabla}^2 \phi\right)+\boldsymbol{\nabla} \times\left(\rho \frac{\partial^2 {\mathbf{H}}}{\partial {{t}}^2} - \mu \boldsymbol{\nabla}^2 \mathbf{H}\right)= 0.\end{aligned} \hspace{\stretch{1}}(4.42)

When neglecting boundary value effects this could be written as a pair of independent equations

\begin{aligned}\rho \frac{\partial^2 {{\phi}}}{\partial {{t}}^2} - (\lambda + 2\mu) \boldsymbol{\nabla}^2 \phi = 0\end{aligned} \hspace{\stretch{1}}(4.43a)

\begin{aligned}\rho \frac{\partial^2 {\mathbf{H}}}{\partial {{t}}^2} - \mu \boldsymbol{\nabla}^2 \mathbf{H}= 0.\end{aligned} \hspace{\stretch{1}}(4.43b)

These are the irrotational (curl free) P-wave and solenoidal (divergence free) S-wave equations respectively.

Phasor description.

It was mentioned that we could assume a phasor representation for our potentials, writing

\begin{aligned}\phi = A \exp\left( i ( \mathbf{k} \cdot \mathbf{x} - \omega t) \right) \end{aligned} \hspace{\stretch{1}}(4.44a)

\begin{aligned}\mathbf{H} = \mathbf{B} \exp\left( i ( \mathbf{k} \cdot \mathbf{x} - \omega t) \right)\end{aligned} \hspace{\stretch{1}}(4.44b)

finding

\begin{aligned}\mathbf{u} = i \mathbf{k} \phi + i \mathbf{k} \times \mathbf{H}.\end{aligned} \hspace{\stretch{1}}(4.45)
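The action of the gradient and curl on these phasors, multiplication by i \mathbf{k}, can be verified symbolically. A sketch with sympy (my own symbol names, not anything from the notes):

```python
import sympy as sp

x1, x2, x3, t = sp.symbols('x1 x2 x3 t', real=True)
k1, k2, k3, w, A = sp.symbols('k1 k2 k3 omega A', real=True)
B1, B2, B3 = sp.symbols('B1 B2 B3', real=True)

phase = sp.exp(sp.I * (k1 * x1 + k2 * x2 + k3 * x3 - w * t))
phi = A * phase                          # scalar potential (4.44a)
H = sp.Matrix([B1, B2, B3]) * phase      # vector potential (4.44b)
k = sp.Matrix([k1, k2, k3])

grad_phi = sp.Matrix([sp.diff(phi, xi) for xi in (x1, x2, x3)])
curl_H = sp.Matrix([
    sp.diff(H[2], x2) - sp.diff(H[1], x3),
    sp.diff(H[0], x3) - sp.diff(H[2], x1),
    sp.diff(H[1], x1) - sp.diff(H[0], x2),
])

# Gradient and curl act on the phasor as multiplication by i k.
assert sp.simplify(grad_phi - sp.I * k * phi) == sp.zeros(3, 1)
assert sp.simplify(curl_H - sp.I * k.cross(H)) == sp.zeros(3, 1)
```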

We did nothing further with either the potential or the phasor theory for solid displacement time evolution, and presumably won’t on the exam either.

Some wave types

Some time was spent on qualitative descriptions and review of the wave types whose time evolution equations we did not attempt to solve

  1. P-waves [7]. Irrotational, non-volume-preserving body wave.
  2. S-waves [8]. Divergence free body wave. Shearing forces are present and volume is preserved (slower than P-waves).
  3. Rayleigh wave [9]. A surface wave that propagates near the surface of a body without penetrating into it.
  4. Love wave [10]. A polarized shear surface wave with the shear displacements moving perpendicular to the direction of propagation.

For reasons that aren’t clear, both the midterm and last year’s final ask us to spew this sort of material (instead of actually trying to do something analytic associated with these waves).

References

[1] R.P. Feynman, R.B. Leighton, and M.L. Sands. Feynman lectures on physics.[Lectures on physics], chapter Elastic Materials. Addison-Wesley Publishing Company. Reading, Massachusetts, 1963.

[2] L.D. Landau, EM Lifshitz, JB Sykes, WH Reid, and E.H. Dill. Theory of Elasticity: Vol. 7 of Course of Theoretical Physics. 1960.

[3] D.J. Acheson. Elementary fluid dynamics. Oxford University Press, USA, 1990.

[4] Wikipedia. Compatibility (mechanics) — wikipedia, the free encyclopedia [online]. 2011. [Online; accessed 23-April-2012]. http://en.wikipedia.org/w/index.php?title=Compatibility_(mechanics)&oldid=463812965.

[5] Wikipedia. Infinitesimal strain theory — wikipedia, the free encyclopedia [online]. 2012. [Online; accessed 23-April-2012]. http://en.wikipedia.org/w/index.php?title=Infinitesimal_strain_theory&oldid=478640283.

[6] Wikipedia. Saint-venant’s compatibility condition — wikipedia, the free encyclopedia [online]. 2011. [Online; accessed 23-April-2012]. http://en.wikipedia.org/w/index.php?title=Saint-Venant\%27s_compatibility_condition&oldid=436103127.

[7] Wikipedia. P-wave — wikipedia, the free encyclopedia [online]. 2012. [Online; accessed 1-February-2012]. http://en.wikipedia.org/w/index.php?title=P-wave&oldid=474119033.

[8] Wikipedia. S-wave — wikipedia, the free encyclopedia [online]. 2011. [Online; accessed 1-February-2012]. http://en.wikipedia.org/w/index.php?title=S-wave&oldid=468110825.

[9] Wikipedia. Rayleigh wave — wikipedia, the free encyclopedia [online]. 2012. [Online; accessed 4-February-2012]. http://en.wikipedia.org/w/index.php?title=Rayleigh_wave&oldid=473693354.

[10] Wikipedia. Love wave — wikipedia, the free encyclopedia [online]. 2012. [Online; accessed 4-February-2012]. http://en.wikipedia.org/w/index.php?title=Love_wave&oldid=474355253.

Posted in Math and Physics Learning. | Tagged: , , , , , , , , , , , , , , , , , , , , , | Leave a Comment »

PHY454H1S Continuum Mechanics. Lecture 8: Phasor description of elastic waves. Fluid dynamics. Taught by Prof. K. Das.

Posted by peeterjoot on February 6, 2012

[Click here for a PDF of this post with nicer formatting and figures if the post had any (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

Disclaimer.

Peeter’s lecture notes from class. May not be entirely coherent.

Review. Elastic wave equation

Starting with

\begin{aligned}\rho \frac{\partial^2 {\mathbf{e}}}{\partial {{t}}^2} = (\lambda + \mu) \boldsymbol{\nabla} (\boldsymbol{\nabla} \cdot \mathbf{e}) + \mu \boldsymbol{\nabla}^2 \mathbf{e}\end{aligned} \hspace{\stretch{1}}(2.1)

and applying a divergence operation we find

\begin{aligned}\frac{\partial^2 {{\theta}}}{\partial {{t}}^2} &= C_L^2 \boldsymbol{\nabla}^2 \theta \\ \theta &= \boldsymbol{\nabla} \cdot \mathbf{e} \\ C_L^2 &= \frac{\lambda + 2\mu}{\rho}.\end{aligned} \hspace{\stretch{1}}(2.2)

This is the P-wave equation. Applying a curl operation we find

\begin{aligned}\frac{\partial^2 {{\boldsymbol{\omega}}}}{\partial {{t}}^2} &= C_T^2 \boldsymbol{\nabla}^2 \boldsymbol{\omega} \\ \boldsymbol{\omega} &= \boldsymbol{\nabla} \times \mathbf{e} \\ C_T^2 &= \frac{\mu}{\rho}.\end{aligned} \hspace{\stretch{1}}(2.5)

This is the S-wave equation. We also found that

\begin{aligned}\frac{C_L}{C_T} > 1,\end{aligned} \hspace{\stretch{1}}(2.8)

and concluded that P waves are faster than S waves. What we haven’t shown is that the P waves are longitudinal, and that the S waves are transverse.

Assuming a gradient and curl description of our displacement

\begin{aligned}\mathbf{e} = \boldsymbol{\nabla} \phi + \boldsymbol{\nabla} \times \mathbf{H} = \mathbf{P} + \mathbf{S},\end{aligned} \hspace{\stretch{1}}(2.9)

we found

\begin{aligned}(\lambda + 2 \mu) \boldsymbol{\nabla}^2 \phi - \rho \frac{\partial^2 {{\phi}}}{\partial {{t}}^2} &= 0 \\ \mu \boldsymbol{\nabla}^2 \mathbf{H} - \rho \frac{\partial^2 {\mathbf{H}}}{\partial {{t}}^2} &= 0,\end{aligned} \hspace{\stretch{1}}(2.10)

allowing us to separately solve for the P and the S wave solutions respectively. Now, let’s introduce a phasor representation (again following section 22 of the text [1])

\begin{aligned}\phi &= A \exp\left( i ( \mathbf{k} \cdot \mathbf{x} - \omega t) \right) \\ \mathbf{H} &= \mathbf{B} \exp\left( i ( \mathbf{k} \cdot \mathbf{x} - \omega t) \right)\end{aligned} \hspace{\stretch{1}}(2.12)

Operating with the gradient we find

\begin{aligned}\mathbf{P}&= \boldsymbol{\nabla} \phi \\ &= \mathbf{e}_k \partial_k A \exp\left( i ( \mathbf{k} \cdot \mathbf{x} - \omega t) \right) \\ &= \mathbf{e}_k \partial_k A \exp\left( i ( k_m x_m - \omega t) \right) \\ &= \mathbf{e}_k i k_k A \exp\left( i ( k_m x_m - \omega t) \right) \\ &= i \mathbf{k} A \exp\left( i ( \mathbf{k} \cdot \mathbf{x} - \omega t) \right) \\ &= i \mathbf{k} \phi\end{aligned}

We can also write

\begin{aligned}\mathbf{P} = \mathbf{k} \phi'\end{aligned} \hspace{\stretch{1}}(2.14)

where \phi' is the derivative of \phi “with respect to its argument”. Here argument must mean the entire phase \mathbf{k} \cdot \mathbf{x} - \omega t.

\begin{aligned}\phi' = \frac{ d\phi( \mathbf{k} \cdot \mathbf{x} - \omega t )}{ d(\mathbf{k} \cdot \mathbf{x} - \omega t) } = i \phi\end{aligned} \hspace{\stretch{1}}(2.15)

Actually, argument is a good label here, since we can use the word in the complex number sense.

For the curl term we find

\begin{aligned}\mathbf{S}&= \boldsymbol{\nabla} \times \mathbf{H} \\ &= \mathbf{e}_a \partial_b H_c \epsilon_{a b c} \\ &= \mathbf{e}_a \partial_b \epsilon_{a b c} B_c \exp\left( i ( \mathbf{k} \cdot \mathbf{x} - \omega t) \right) \\ &= \mathbf{e}_a \partial_b \epsilon_{a b c} B_c \exp\left( i ( k_m x_m - \omega t) \right) \\ &= \mathbf{e}_a i k_b \epsilon_{a b c} B_c \exp\left( i ( \mathbf{k} \cdot \mathbf{x} - \omega t) \right) \\ &= i \mathbf{k} \times \mathbf{H}\end{aligned}

Again writing

\begin{aligned}\mathbf{H}' = \frac{ d\mathbf{H}( \mathbf{k} \cdot \mathbf{x} - \omega t )}{ d(\mathbf{k} \cdot \mathbf{x} - \omega t) } = i \mathbf{H}\end{aligned} \hspace{\stretch{1}}(2.16)

we can write the S wave as

\begin{aligned}\mathbf{S} = \mathbf{k} \times \mathbf{H}'\end{aligned} \hspace{\stretch{1}}(2.17)

Some waves illustrated.

The following wave types were noted, but not defined:

\begin{itemize}
\item Rayleigh wave. This is discussed in section 24 of the text (a wave that propagates near the surface of a body without penetrating into it). Wikipedia has an illustration of one possible mode of propagation [2].
\item Love wave. These aren’t discussed in the text, but wikipedia [3] describes them as polarized shear waves (where the figure indicates that the shear displacements are perpendicular to the direction of propagation).
\end{itemize}

Some illustrations from the class notes were also shown. Hopefully we’ll have some homework assignments where we do some problems to get a feel for how to apply the formalism.

Fluid dynamics.

In fluid dynamics we look at displacements with respect to time as illustrated in figure (\ref{fig:continuumL8:continuumL8fig1})
\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.2\textheight]{continuumL8fig1}
\caption{Differential displacement.}
\end{figure}

\begin{aligned}d\mathbf{x}' = d\mathbf{x} + d\mathbf{u} \delta t\end{aligned} \hspace{\stretch{1}}(3.18)

In index notation

\begin{aligned}dx_i'&= dx_i + du_i \delta t \\ &= dx_i + \frac{\partial {u_i}}{\partial {x_j}} dx_j \delta t\end{aligned}

We define

\begin{aligned}e_{ij} = \frac{1}{{2}} \left(\frac{\partial {u_i}}{\partial {x_j}} +\frac{\partial {u_j}}{\partial {x_i}} \right)\end{aligned} \hspace{\stretch{1}}(3.19)

a symmetric tensor. We also define

\begin{aligned}\omega_{ij} = \frac{1}{{2}} \left(\frac{\partial {u_i}}{\partial {x_j}}-\frac{\partial {u_j}}{\partial {x_i}} \right)\end{aligned} \hspace{\stretch{1}}(3.20)

Effect of e_{ij} when diagonalized

\begin{aligned}e_{ij} =\begin{bmatrix}e_{11} & 0 & 0 \\ 0 & e_{22} & 0 \\ 0 & 0 & e_{33}\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.21)

so that in this frame of reference we have

\begin{aligned}dx_1' &= ( 1 + e_{11} \delta t) dx_1 \\ dx_2' &= ( 1 + e_{22} \delta t) dx_2 \\ dx_3' &= ( 1 + e_{33} \delta t) dx_3\end{aligned} \hspace{\stretch{1}}(3.22)

Let’s find the matrix form of the antisymmetric tensor. We find

\begin{aligned}\omega_{11} = \omega_{22} = \omega_{33} = 0\end{aligned} \hspace{\stretch{1}}(3.25)

Introducing a vorticity vector

\begin{aligned}\boldsymbol{\omega} = \boldsymbol{\nabla} \times \mathbf{u}\end{aligned} \hspace{\stretch{1}}(3.26)

we find

\begin{aligned}\omega_{12} &= \frac{1}{{2}}\left( \frac{\partial {u_1}}{\partial {x_2}} -\frac{\partial {u_2}}{\partial {x_1}} \right) = - \frac{1}{{2}} (\boldsymbol{\nabla} \times \mathbf{u})_3 \\ \omega_{23} &= \frac{1}{{2}}\left( \frac{\partial {u_2}}{\partial {x_3}} -\frac{\partial {u_3}}{\partial {x_2}} \right) = - \frac{1}{{2}} (\boldsymbol{\nabla} \times \mathbf{u})_1 \\ \omega_{31} &= \frac{1}{{2}}\left( \frac{\partial {u_3}}{\partial {x_1}} -\frac{\partial {u_1}}{\partial {x_3}} \right) = - \frac{1}{{2}} (\boldsymbol{\nabla} \times \mathbf{u})_2\end{aligned} \hspace{\stretch{1}}(3.27)

Writing

\begin{aligned}\Omega_i = \frac{1}{{2}} \omega_i\end{aligned} \hspace{\stretch{1}}(3.30)

we find the matrix form of this antisymmetric tensor

\begin{aligned}\omega_{ij}=\begin{bmatrix}0 & -\Omega_3 & \Omega_2 \\ \Omega_3 & 0 & -\Omega_1 \\ -\Omega_2 & \Omega_1 & 0 \\ \end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.31)

\begin{aligned}dx_1'&= dx_1 + \left( \not{{\omega_{11}}} dx_1 + \omega_{12} dx_2 + \omega_{13} dx_3 \right) \delta t \\ &= dx_1 + \left( \omega_{12} dx_2 + \omega_{13} dx_3 \right) \delta t \\ &= dx_1 + \left( \Omega_2 dx_3 - \Omega_3 dx_2 \right) \delta t\end{aligned}

Doing this for all components we find

\begin{aligned}d\mathbf{x}' = d\mathbf{x} + (\boldsymbol{\Omega} \times d\mathbf{x}) \delta t.\end{aligned} \hspace{\stretch{1}}(3.32)

The tensor \omega_{ij} implies rotation of a control volume with an angular velocity \boldsymbol{\Omega} = \boldsymbol{\omega}/2 (half the vorticity vector).

In general we have

\begin{aligned}dx_i' = dx_i + e_{ij} dx_j \delta t + \omega_{ij} dx_j \delta t\end{aligned} \hspace{\stretch{1}}(3.33)
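A small numpy check (my own, with a randomly generated velocity gradient) that the antisymmetric part acts as the rotation of 3.32, and that 3.33 reproduces the full first order displacement:

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.standard_normal((3, 3))   # hypothetical velocity gradient, G[i, j] = du_i/dx_j

e = (G + G.T) / 2                 # symmetric tensor (3.19)
w = (G - G.T) / 2                 # antisymmetric tensor (3.20)

# Omega = (1/2) curl(u), assembled from the components of G.
Omega = 0.5 * np.array([G[2, 1] - G[1, 2],
                        G[0, 2] - G[2, 0],
                        G[1, 0] - G[0, 1]])

dx = rng.standard_normal(3)
dt = 1e-3

# The antisymmetric part is a rotation, as in (3.32).
assert np.allclose(w @ dx * dt, np.cross(Omega, dx) * dt)
# And (3.33) reproduces the full first order displacement.
assert np.allclose(dx + G @ dx * dt, dx + e @ dx * dt + w @ dx * dt)
```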

Making sense of things.

After this first fluid dynamics lecture I was left troubled. We’d just been barraged with a set of equations pulled out of a magic hat, with no notion of where they came from. Unlike the continuum strain tensor, which was derived by considering differences in squared displacements, we now have an antisymmetric term. Why did we have no such term when considering solids?

After a bit of thought I think I see where things are coming from. We have essentially looked at a first order decomposition of the displacement (per unit time) of a point in terms of symmetric and antisymmetric terms. This is really just a gradient evaluation, split into coordinates

\begin{aligned}x_i' &= x_i + (\boldsymbol{\nabla} u_i) \cdot d\mathbf{x} \delta t \\ &= x_i + \frac{\partial {u_i}}{\partial {x_j}} dx_j \delta t \\ &= x_i + \frac{1}{{2}}\left(\frac{\partial {u_i}}{\partial {x_j}} +\frac{\partial {u_j}}{\partial {x_i}} \right)dx_j \delta t +\frac{1}{{2}}\left(\frac{\partial {u_i}}{\partial {x_j}} -\frac{\partial {u_j}}{\partial {x_i}} \right)dx_j \delta t  \\ &=x_i + e_{ij} dx_j \delta t + \omega_{ij} dx_j \delta t\end{aligned}

Here, as in the solids case, we have

\begin{aligned}\mathbf{u} = \mathbf{x}' - \mathbf{x}\end{aligned} \hspace{\stretch{1}}(3.34)

References

[1] L.D. Landau, EM Lifshitz, JB Sykes, WH Reid, and E.H. Dill. Theory of elasticity: Vol. 7 of course of theoretical physics. 1960.

[2] Wikipedia. Rayleigh wave — wikipedia, the free encyclopedia [online]. 2012. [Online; accessed 4-February-2012]. http://en.wikipedia.org/w/index.php?title=Rayleigh_wave&oldid=473693354.

[3] Wikipedia. Love wave — wikipedia, the free encyclopedia [online]. 2012. [Online; accessed 4-February-2012]. http://en.wikipedia.org/w/index.php?title=Love_wave&oldid=474355253.

Posted in Math and Physics Learning. | Tagged: , , , , , , , , , | Leave a Comment »

PHY450H1S. Relativistic Electrodynamics Tutorial 4 (TA: Simon Freedman). Waveguides: confined EM waves.

Posted by peeterjoot on March 14, 2011

[Click here for a PDF of this post with nicer formatting]

Motivation

While this isn’t part of the course, waveguides are such a common application that they are worth a mention, and that will be done in this tutorial.

We will setup our system with a waveguide (conducting surface that confines the radiation) oriented in the \hat{\mathbf{z}} direction. The shape can be arbitrary

PICTURE: cross section of wacky shape.

At the surface of a conductor.

At the surface of the conductor (I presume this means the interior surface where there is no charge or current enclosed) we have

\begin{aligned}\boldsymbol{\nabla} \times \mathbf{E} &= - \frac{1}{{c}} \frac{\partial {\mathbf{B}}}{\partial {t}} \\ \boldsymbol{\nabla} \times \mathbf{B} &= \frac{1}{{c}} \frac{\partial {\mathbf{E}}}{\partial {t}} \\ \boldsymbol{\nabla} \cdot \mathbf{B} &= 0 \\ \boldsymbol{\nabla} \cdot \mathbf{E} &= 0\end{aligned} \hspace{\stretch{1}}(1.1)

If we are talking about the exterior surface, do we need to make any other assumptions (perfect conductors, or constant potentials)?

Wave equations.

For electric and magnetic fields in vacuum, we can show easily that these, like the potentials, separately satisfy the wave equation

Taking curls of the Maxwell curl equations above we have

\begin{aligned}\boldsymbol{\nabla} \times (\boldsymbol{\nabla} \times \mathbf{E}) &= - \frac{1}{{c^2}} \frac{\partial^2 {\mathbf{E}}}{\partial {{t}}^2} \\ \boldsymbol{\nabla} \times (\boldsymbol{\nabla} \times \mathbf{B}) &= - \frac{1}{{c^2}} \frac{\partial^2 {\mathbf{B}}}{\partial {{t}}^2},\end{aligned} \hspace{\stretch{1}}(1.5)

but we have for vector \mathbf{M}

\begin{aligned}\boldsymbol{\nabla} \times (\boldsymbol{\nabla} \times \mathbf{M})=\boldsymbol{\nabla} (\boldsymbol{\nabla} \cdot \mathbf{M}) - \Delta \mathbf{M},\end{aligned} \hspace{\stretch{1}}(1.7)

which gives us a pair of wave equations

\begin{aligned}\square \mathbf{E} &= 0 \\ \square \mathbf{B} &= 0.\end{aligned} \hspace{\stretch{1}}(1.8)
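The curl-of-curl identity 1.7 used above holds for any smooth vector field, which we can verify symbolically. A sketch with sympy (the particular field is an arbitrary choice of mine, standing in for \mathbf{E} or \mathbf{B}):

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
X = (x, y, z)
# An arbitrary smooth vector field standing in for E or B.
M = sp.Matrix([x**2 * sp.sin(y), y * sp.exp(z), z**2 * sp.cos(x)])

def curl(F):
    return sp.Matrix([
        sp.diff(F[2], y) - sp.diff(F[1], z),
        sp.diff(F[0], z) - sp.diff(F[2], x),
        sp.diff(F[1], x) - sp.diff(F[0], y),
    ])

div_M = sum(sp.diff(M[i], X[i]) for i in range(3))
grad_div = sp.Matrix([sp.diff(div_M, xi) for xi in X])
lap_M = sp.Matrix([sum(sp.diff(M[i], xi, 2) for xi in X) for i in range(3)])

# curl curl M = grad(div M) - Laplacian M, the identity (1.7).
assert sp.simplify(curl(curl(M)) - (grad_div - lap_M)) == sp.zeros(3, 1)
```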

We still have the original constraints of Maxwell’s equations to deal with, but we are free now to pick the complex exponentials as fundamental solutions, as our starting point

\begin{aligned}\mathbf{E} &= \mathbf{E}_0 e^{i k^a x_a} = \mathbf{E}_0 e^{ i (k^0 x_0 - \mathbf{k} \cdot \mathbf{x}) } \\ \mathbf{B} &= \mathbf{B}_0 e^{i k^a x_a} = \mathbf{B}_0 e^{ i (k^0 x_0 - \mathbf{k} \cdot \mathbf{x}) },\end{aligned} \hspace{\stretch{1}}(1.10)

With k_0 = \omega/c and x_0 = c t this is

\begin{aligned}\mathbf{E} &= \mathbf{E}_0 e^{ i (\omega t - \mathbf{k} \cdot \mathbf{x}) } \\ \mathbf{B} &= \mathbf{B}_0 e^{ i (\omega t - \mathbf{k} \cdot \mathbf{x}) }.\end{aligned} \hspace{\stretch{1}}(1.12)

For the vacuum case, with monochromatic light, we treated the amplitudes as constants. Let’s see what happens if we relax this assumption, and allow for spatial dependence (but no time dependence) of \mathbf{E}_0 and \mathbf{B}_0. For the LHS of the electric field curl equation we have

\begin{aligned}0 &= \boldsymbol{\nabla} \times \mathbf{E}_0 e^{i k_a x^a} \\ &= (\boldsymbol{\nabla} \times \mathbf{E}_0 - \mathbf{E}_0 \times \boldsymbol{\nabla}) e^{i k_a x^a} \\ &= (\boldsymbol{\nabla} \times \mathbf{E}_0 - \mathbf{E}_0 \times \mathbf{e}^\alpha i k_a \partial_\alpha x^a) e^{i k_a x^a} \\ &= (\boldsymbol{\nabla} \times \mathbf{E}_0 + \mathbf{E}_0 \times \mathbf{e}^\alpha i k^a {\delta_\alpha}^a ) e^{i k_a x^a} \\ &= (\boldsymbol{\nabla} \times \mathbf{E}_0 + i \mathbf{E}_0 \times \mathbf{k} ) e^{i k_a x^a}.\end{aligned}

Similarly for the divergence we have

\begin{aligned}0 &= \boldsymbol{\nabla} \cdot \mathbf{E}_0 e^{i k_a x^a} \\ &= (\boldsymbol{\nabla} \cdot \mathbf{E}_0 + \mathbf{E}_0 \cdot \boldsymbol{\nabla}) e^{i k_a x^a} \\ &= (\boldsymbol{\nabla} \cdot \mathbf{E}_0 + \mathbf{E}_0 \cdot \mathbf{e}^\alpha i k_a \partial_\alpha x^a) e^{i k_a x^a} \\ &= (\boldsymbol{\nabla} \cdot \mathbf{E}_0 - \mathbf{E}_0 \cdot \mathbf{e}^\alpha i k^a {\delta_\alpha}^a ) e^{i k_a x^a} \\ &= (\boldsymbol{\nabla} \cdot \mathbf{E}_0 - i \mathbf{k} \cdot \mathbf{E}_0 ) e^{i k_a x^a}.\end{aligned}

This provides constraints on the amplitudes

\begin{aligned}\boldsymbol{\nabla} \times \mathbf{E}_0 - i \mathbf{k} \times \mathbf{E}_0 &= -i \frac{\omega}{c} \mathbf{B}_0 \\ \boldsymbol{\nabla} \times \mathbf{B}_0 - i \mathbf{k} \times \mathbf{B}_0 &= i \frac{\omega}{c} \mathbf{E}_0 \\ \boldsymbol{\nabla} \cdot \mathbf{E}_0 - i \mathbf{k} \cdot \mathbf{E}_0 &= 0 \\ \boldsymbol{\nabla} \cdot \mathbf{B}_0 - i \mathbf{k} \cdot \mathbf{B}_0 &= 0\end{aligned} \hspace{\stretch{1}}(1.14)

Applying the wave equation operator to our phasor we get

\begin{aligned}0 &=\left(\frac{1}{{c^2}} \partial_{tt} - \boldsymbol{\nabla}^2 \right) \mathbf{E}_0 e^{i (\omega t - \mathbf{k} \cdot \mathbf{x})} \\ &=\left(-\frac{\omega^2}{c^2} - \boldsymbol{\nabla}^2 + \mathbf{k}^2 \right) \mathbf{E}_0 e^{i (\omega t - \mathbf{k} \cdot \mathbf{x})}\end{aligned}

So the momentum space equivalents of the wave equations are

\begin{aligned}\left( \boldsymbol{\nabla}^2 +\frac{\omega^2}{c^2} - \mathbf{k}^2 \right) \mathbf{E}_0 &= 0 \\ \left( \boldsymbol{\nabla}^2 +\frac{\omega^2}{c^2} - \mathbf{k}^2 \right) \mathbf{B}_0 &= 0.\end{aligned} \hspace{\stretch{1}}(1.18)

Observe that if c^2 \mathbf{k}^2 = \omega^2, then these amplitudes are harmonic functions (solutions to Laplace’s equation). However, it doesn’t appear that we require such a lightlike relation for the four vector k^a = (\omega/c, \mathbf{k}).

Back to the tutorial notes.

In class we went straight to an assumed solution of the form

\begin{aligned}\mathbf{E} &= \mathbf{E}_0(x, y) e^{ i(\omega t - k z) } \\ \mathbf{B} &= \mathbf{B}_0(x, y) e^{ i(\omega t - k z) },\end{aligned} \hspace{\stretch{1}}(2.20)

where \mathbf{k} = k \hat{\mathbf{z}}. Our Laplacian was also written as the sum of components in the propagation and perpendicular directions

\begin{aligned}\boldsymbol{\nabla}^2 = \frac{\partial^2 {{}}}{\partial {{x_\perp}}^2} + \frac{\partial^2 {{}}}{\partial {{z}}^2}.\end{aligned} \hspace{\stretch{1}}(2.22)

With no z dependence in the amplitudes we have

\begin{aligned}\left( \frac{\partial^2 {{}}}{\partial {{x_\perp}}^2} +\frac{\omega^2}{c^2} - \mathbf{k}^2 \right) \mathbf{E}_0 &= 0 \\ \left( \frac{\partial^2 {{}}}{\partial {{x_\perp}}^2} +\frac{\omega^2}{c^2} - \mathbf{k}^2 \right) \mathbf{B}_0 &= 0.\end{aligned} \hspace{\stretch{1}}(2.23)

Separation into components.

It was left as an exercise to separate out our Maxwell equations, so that our field components \mathbf{E}_0 = \mathbf{E}_\perp + \mathbf{E}_z and \mathbf{B}_0 = \mathbf{B}_\perp + \mathbf{B}_z in the propagation direction, and components in the perpendicular direction are separated

\begin{aligned}\boldsymbol{\nabla} \times \mathbf{E}_0 &=(\boldsymbol{\nabla}_\perp + \hat{\mathbf{z}}\partial_z) \times \mathbf{E}_0 \\ &=\boldsymbol{\nabla}_\perp \times \mathbf{E}_0 \\ &=\boldsymbol{\nabla}_\perp \times (\mathbf{E}_\perp + \mathbf{E}_z) \\ &=\boldsymbol{\nabla}_\perp \times \mathbf{E}_\perp +\boldsymbol{\nabla}_\perp \times \mathbf{E}_z \\ &=( \hat{\mathbf{x}} \partial_x +\hat{\mathbf{y}} \partial_y ) \times ( \hat{\mathbf{x}} E_x +\hat{\mathbf{y}} E_y ) +\boldsymbol{\nabla}_\perp \times \mathbf{E}_z \\ &=\hat{\mathbf{z}} (\partial_x E_y - \partial_y E_x) +\boldsymbol{\nabla}_\perp \times \mathbf{E}_z.\end{aligned}

We can do something similar for \mathbf{B}_0. This allows for a split of 1.14 into \hat{\mathbf{z}} and perpendicular components

\begin{aligned}\boldsymbol{\nabla}_\perp \times \mathbf{E}_\perp &= -i \frac{\omega}{c} \mathbf{B}_z \\ \boldsymbol{\nabla}_\perp \times \mathbf{B}_\perp &= i \frac{\omega}{c} \mathbf{E}_z \\ \boldsymbol{\nabla}_\perp \times \mathbf{E}_z - i \mathbf{k} \times \mathbf{E}_\perp &= -i \frac{\omega}{c} \mathbf{B}_\perp \\ \boldsymbol{\nabla}_\perp \times \mathbf{B}_z - i \mathbf{k} \times \mathbf{B}_\perp &= i \frac{\omega}{c} \mathbf{E}_\perp \\ \boldsymbol{\nabla}_\perp \cdot \mathbf{E}_\perp &= i k E_z - \partial_z E_z \\ \boldsymbol{\nabla}_\perp \cdot \mathbf{B}_\perp &= i k B_z - \partial_z B_z.\end{aligned} \hspace{\stretch{1}}(3.25)

So we see that once we have a solution for \mathbf{E}_z and \mathbf{B}_z (by solving the wave equation above for those components), the components for the fields in terms of those components can be found. Alternately, if one solves for the perpendicular components of the fields, these propagation components are available immediately with only differentiation.

In the case where the perpendicular components are taken as given

\begin{aligned}\mathbf{B}_z &= i \frac{ c  }{\omega} \boldsymbol{\nabla}_\perp \times \mathbf{E}_\perp \\ \mathbf{E}_z &= -i \frac{ c  }{\omega} \boldsymbol{\nabla}_\perp \times \mathbf{B}_\perp,\end{aligned} \hspace{\stretch{1}}(3.31)

we can express the remaining ones strictly in terms of the perpendicular fields

\begin{aligned}\frac{\omega}{c} \mathbf{B}_\perp &= \frac{c}{\omega} \boldsymbol{\nabla}_\perp \times (\boldsymbol{\nabla}_\perp \times \mathbf{B}_\perp) + \mathbf{k} \times \mathbf{E}_\perp \\ \frac{\omega}{c} \mathbf{E}_\perp &= \frac{c}{\omega} \boldsymbol{\nabla}_\perp \times (\boldsymbol{\nabla}_\perp \times \mathbf{E}_\perp) - \mathbf{k} \times \mathbf{B}_\perp \\ \boldsymbol{\nabla}_\perp \cdot \mathbf{E}_\perp &= -i \frac{c}{\omega} (i k - \partial_z) \hat{\mathbf{z}} \cdot (\boldsymbol{\nabla}_\perp \times \mathbf{B}_\perp) \\ \boldsymbol{\nabla}_\perp \cdot \mathbf{B}_\perp &= i \frac{c}{\omega} (i k - \partial_z) \hat{\mathbf{z}} \cdot (\boldsymbol{\nabla}_\perp \times \mathbf{E}_\perp).\end{aligned} \hspace{\stretch{1}}(3.33)

Is it at all helpful to expand the double cross products?

\begin{aligned}\frac{\omega^2}{c^2} \mathbf{B}_\perp &= \boldsymbol{\nabla}_\perp (\boldsymbol{\nabla}_\perp \cdot \mathbf{B}_\perp) -{\boldsymbol{\nabla}_\perp}^2 \mathbf{B}_\perp + \frac{\omega}{c} \mathbf{k} \times \mathbf{E}_\perp \\ &= i \frac{c}{\omega}(i k - \partial_z)\boldsymbol{\nabla}_\perp \hat{\mathbf{z}} \cdot (\boldsymbol{\nabla}_\perp \times \mathbf{E}_\perp)-{\boldsymbol{\nabla}_\perp}^2 \mathbf{B}_\perp + \frac{\omega}{c} \mathbf{k} \times \mathbf{E}_\perp \end{aligned}

This gives us

\begin{aligned}\left( {\boldsymbol{\nabla}_\perp}^2 + \frac{\omega^2}{c^2} \right) \mathbf{B}_\perp &= - \frac{c}{\omega} (k + i\partial_z) \boldsymbol{\nabla}_\perp \hat{\mathbf{z}} \cdot (\boldsymbol{\nabla}_\perp \times \mathbf{E}_\perp) + \frac{\omega}{c} \mathbf{k} \times \mathbf{E}_\perp \\ \left( {\boldsymbol{\nabla}_\perp}^2 + \frac{\omega^2}{c^2} \right) \mathbf{E}_\perp &= -\frac{c}{\omega} (k + i\partial_z) \boldsymbol{\nabla}_\perp \hat{\mathbf{z}} \cdot (\boldsymbol{\nabla}_\perp \times \mathbf{B}_\perp) - \frac{\omega}{c} \mathbf{k} \times \mathbf{B}_\perp,\end{aligned} \hspace{\stretch{1}}(3.37)

but that doesn’t seem particularly useful for completely solving the system? It appears fairly messy to try to solve for \mathbf{E}_\perp and \mathbf{B}_\perp given the propagation direction fields. I wonder if there is a simplification available that I am missing?

Solving the momentum space wave equations.

Back to the class notes. We proceeded to solve for \mathbf{E}_z and \mathbf{B}_z from the wave equations by separation of variables. We wish to solve equations of the form

\begin{aligned}\left( \frac{\partial^2 {{}}}{\partial {{x}}^2} + \frac{\partial^2 {{}}}{\partial {{y}}^2} + \frac{\omega^2}{c^2} - \mathbf{k}^2 \right) \phi(x,y) = 0\end{aligned} \hspace{\stretch{1}}(4.39)

Write \phi(x,y) = X(x) Y(y), so that we have

\begin{aligned}\frac{X''}{X} + \frac{Y''}{Y} = \mathbf{k}^2 - \frac{\omega^2}{c^2}\end{aligned} \hspace{\stretch{1}}(4.40)

One solution is sinusoidal

\begin{aligned}\frac{X''}{X} &= -k_1^2 \\ \frac{Y''}{Y} &= -k_2^2 \\ -k_1^2 - k_2^2&= \mathbf{k}^2 - \frac{\omega^2}{c^2}.\end{aligned} \hspace{\stretch{1}}(4.41)
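The sinusoidal product solution can be checked symbolically: the separated ansatz satisfies 4.39 exactly when -k_1^2 - k_2^2 = \mathbf{k}^2 - \omega^2/c^2. A sympy sketch (my own symbol names):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
k1, k2, k, c = sp.symbols('k1 k2 k c', positive=True)

phi = sp.sin(k1 * x) * sp.sin(k2 * y)       # one sinusoidal product solution
omega = c * sp.sqrt(k**2 + k1**2 + k2**2)   # enforces -k1^2 - k2^2 = k^2 - omega^2/c^2

helmholtz = (sp.diff(phi, x, 2) + sp.diff(phi, y, 2)
             + (omega**2 / c**2 - k**2) * phi)
assert sp.simplify(helmholtz) == 0          # (4.39) is satisfied
```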

The example in the tutorial now switched to a rectangular waveguide, still oriented with the propagation direction down the z-axis, but with lengths a and b along the x and y axis respectively.

Writing k_1 = 2\pi m/a, and k_2 = 2 \pi n/ b, we have

\begin{aligned}\phi(x, y) = \sum_{m n} a_{m n} \exp\left( \frac{2 \pi i m}{a} x \right)\exp\left( \frac{2 \pi i n}{b} y \right)\end{aligned} \hspace{\stretch{1}}(4.44)

We were also provided with some definitions

\begin{definition}TE (Transverse Electric)

\mathbf{E}_3 = 0.
\end{definition}
\begin{definition}
TM (Transverse Magnetic)

\mathbf{B}_3 = 0.
\end{definition}
\begin{definition}
TEM (Transverse Electromagnetic)

\mathbf{E}_3 = \mathbf{B}_3 = 0.
\end{definition}

\begin{claim}TEM waves do not exist in a hollow waveguide.
\end{claim}

Why: I had in my notes

\begin{aligned}\boldsymbol{\nabla} \times \mathbf{E} = 0 & \implies \frac{\partial {E_2}}{\partial {x^1}} -\frac{\partial {E_1}}{\partial {x^2}} = 0 \\ \boldsymbol{\nabla} \cdot \mathbf{E} = 0 & \implies \frac{\partial {E_1}}{\partial {x^1}} +\frac{\partial {E_2}}{\partial {x^2}} = 0\end{aligned}

and then

\begin{aligned}\boldsymbol{\nabla}^2 \phi &= 0 \\ \phi &= \text{const}\end{aligned}

In retrospect I fail to see how these are connected? What happened to the \partial_t \mathbf{B} term in the curl equation above?

It was argued that we have \mathbf{E}_\parallel = \mathbf{B}_\perp = 0 on the boundary.

So for the TE case, where \mathbf{E}_3 = 0, we have from the separation of variables argument

\begin{aligned}\hat{\mathbf{z}} \cdot \mathbf{B}_0(x, y) =\sum_{m n} a_{m n} \cos\left( \frac{2 \pi i m}{a} x \right)\cos\left( \frac{2 \pi i n}{b} y \right).\end{aligned} \hspace{\stretch{1}}(4.45)

No sines because

\begin{aligned}B_1 \propto \frac{\partial {B_3}}{\partial {x_a}} \rightarrow \cos(k_1 x^1).\end{aligned} \hspace{\stretch{1}}(4.46)

The quantity

\begin{aligned}a_{m n}\cos\left( \frac{2 \pi i m}{a} x \right)\cos\left( \frac{2 \pi i n}{b} y \right).\end{aligned} \hspace{\stretch{1}}(4.47)

is called the TE_{m n} mode. Note that for the m = n = 0 case \mathbf{B} would be constant, and an Ampere loop then requires \mathbf{B} = 0 since there is no enclosed current.

Writing

\begin{aligned}k &= \frac{\omega}{c} \sqrt{ 1 - \left(\frac{\omega_{m n}}{\omega}\right)^2 } \\ \omega_{m n} &= 2 \pi c \sqrt{ \left(\frac{m}{a} \right)^2 + \left(\frac{n}{b} \right)^2 }\end{aligned} \hspace{\stretch{1}}(4.48)

When \omega < \omega_{m n} we have k purely imaginary, and the term

\begin{aligned}e^{-i k z} = e^{- {\left\lvert{k}\right\rvert} z}\end{aligned} \hspace{\stretch{1}}(4.50)

represents the die off.

\omega_{10} is the smallest.

Note that the convention is that the m in TE_{m n} is the bigger of the two indexes, so \omega > \omega_{10}.

The phase velocity

\begin{aligned}V_\phi = \frac{\omega}{k} = \frac{c}{\sqrt{ 1 - \left(\frac{\omega_{m n}}{\omega}\right)^2 }} \ge c\end{aligned} \hspace{\stretch{1}}(4.51)

However, energy is transmitted with the group velocity, the ratio of the Poynting vector and energy density

\begin{aligned}\frac{\left\langle{\mathbf{S}}\right\rangle}{\left\langle{{U}}\right\rangle} = V_g = \frac{\partial {\omega}}{\partial {k}} = 1/\frac{\partial {k}}{\partial {\omega}}\end{aligned} \hspace{\stretch{1}}(4.52)

(This can be shown).

Since

\begin{aligned}\left(\frac{\partial {k}}{\partial {\omega}}\right)^{-1} = \left(\frac{\partial {}}{\partial {\omega}}\sqrt{ (\omega/c)^2 - (\omega_{m n}/c)^2 }\right)^{-1} = c \sqrt{ 1 - (\omega_{m n}/\omega)^2 } \le c\end{aligned} \hspace{\stretch{1}}(4.53)

We see that the energy is transmitted at less than the speed of light as expected.
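These dispersion relations are easy to get sign-flipped, so a quick numeric sanity check is worthwhile (this is my own aside, not part of the original lecture). The Python sketch below uses made-up guide dimensions a, b and a made-up drive frequency, and verifies that above cutoff the phase and group velocities straddle c with V_\phi V_g = c^2:

```python
import numpy as np

c = 1.0                     # work in units where c = 1
a, b = 1.0, 0.5             # hypothetical guide dimensions
m, n = 1, 0                 # TE_10 mode

# cutoff frequency, using the 2 pi convention of the text
omega_mn = 2 * np.pi * c * np.sqrt((m / a) ** 2 + (n / b) ** 2)

omega = 1.5 * omega_mn      # drive above cutoff so k is real
k = (omega / c) * np.sqrt(1 - (omega_mn / omega) ** 2)

v_phase = omega / k                                  # >= c
v_group = c * np.sqrt(1 - (omega_mn / omega) ** 2)   # <= c

assert v_phase > c and v_group < c
assert abs(v_phase * v_group - c ** 2) < 1e-9
```

Below cutoff (\omega < \omega_{m n}) the square root goes imaginary and the same formula reproduces the evanescent die-off instead.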

Final remarks.

I’d started converting my handwritten scrawl for this tutorial into an attempt at working through these ideas with enough detail that they are self-contained, but gave up part way. This appears to me to be too big a sub-discipline to do justice to in one hour’s class. As is, it is enough to at least get a feel for some of the ideas involved. I think were I to learn this for real, I’d need a good text as a reference (or the time to attempt to blunder through the ideas in much, much more detail).

Posted in Math and Physics Learning. | Tagged: , , , , , , , , , , , | Leave a Comment »

Comparing phasor and geometric transverse solutions to the Maxwell equation (continued)

Posted by peeterjoot on August 9, 2009

Continuing from the previous post.

[Click here for a PDF of this post with nicer formatting]

Explicit split of geometric phasor into advanced and receding parts

For a more general split of the geometric phasor into advanced and receding wave terms, will there be interdependence between the electric and magnetic field components? Going back to (16), and rearranging, we have

\begin{aligned}2 e^{ -I \hat{\mathbf{k}} \omega t } e^{ I \mathbf{k} \cdot \mathbf{x} }&=(C_{-} -I S_{-})+\hat{\mathbf{k}} (C_{-} -I S_{-} )+(C_{+} +I S_{+})-\hat{\mathbf{k}} (C_{+} +I S_{+}) \\  \end{aligned}

So we have

\begin{aligned}e^{ -I \hat{\mathbf{k}} \omega t } e^{ I \mathbf{k} \cdot \mathbf{x} }&=\frac{1}{{2}}(1 + \hat{\mathbf{k}})e^{-I(\omega t - \mathbf{k} \cdot \mathbf{x})}+\frac{1}{{2}}(1 - \hat{\mathbf{k}})e^{I(\omega t + \mathbf{k} \cdot \mathbf{x})} \end{aligned} \quad\quad\quad(19)
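This split is easy to check numerically (a sanity check of mine, not part of the original derivation) in the Pauli-matrix representation of the space algebra, where the \sigma_k are the Pauli matrices and the pseudoscalar I = \sigma_1\sigma_2\sigma_3 becomes i times the identity. The sketch below uses arbitrary sample values of \omega t and \mathbf{k} \cdot \mathbf{x}, with \hat{\mathbf{k}} = \sigma_3:

```python
import numpy as np

def expm(A, terms=40):
    """Matrix exponential by Taylor series; fine for small 2x2 arguments."""
    out = np.eye(A.shape[0], dtype=complex)
    term = out.copy()
    for j in range(1, terms):
        term = term @ A / j
        out = out + term
    return out

# Pauli-matrix representation of the 3D geometric algebra
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
Id = np.eye(2, dtype=complex)
I = 1j * Id                 # pseudoscalar s1 s2 s3

khat = s3                   # propagation direction
wt, kx = 0.7, 0.3           # sample values of omega*t and k.x

# left side: product of the two exponentials
lhs = expm(-I @ khat * wt) @ expm(I * kx)
# right side: advanced plus receding projector terms of (19)
rhs = (0.5 * (Id + khat) @ expm(-I * (wt - kx))
       + 0.5 * (Id - khat) @ expm(I * (wt + kx)))
assert np.allclose(lhs, rhs)
```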

As observed, if we have \hat{\mathbf{k}} \mathcal{F} = \mathcal{F}, the result is only the advanced wave term

\begin{aligned}e^{ -I \hat{\mathbf{k}} \omega t } e^{ I \mathbf{k} \cdot \mathbf{x} } \mathcal{F} = e^{-I(\omega t - \mathbf{k} \cdot \mathbf{x})} \mathcal{F} \end{aligned}

Similarly, with absorption of \hat{\mathbf{k}} with the opposing sign \hat{\mathbf{k}} \mathcal{F} = -\mathcal{F}, we have only the receding wave

\begin{aligned}e^{ -I \hat{\mathbf{k}} \omega t } e^{ I \mathbf{k} \cdot \mathbf{x} } \mathcal{F} = e^{I(\omega t + \mathbf{k} \cdot \mathbf{x})} \mathcal{F} \end{aligned}

Either of the receding or advancing wave solutions should independently satisfy the Maxwell equation operator. Let’s verify this: for each of the \pm cases we check that the following is a solution, and examine the constraints required for that to be the case.

\begin{aligned}F = \frac{1}{{2}}(1 \pm \hat{\mathbf{k}}) e^{\pm I(\omega t \pm \mathbf{k} \cdot \mathbf{x})} \mathcal{F} \end{aligned} \quad\quad\quad(20)

Now we wish to apply the Maxwell equation operator \boldsymbol{\nabla} + \sqrt{\mu\epsilon}\partial_0 to this assumed solution. That is

\begin{aligned}0 &= (\boldsymbol{\nabla} + \sqrt{\mu\epsilon}\partial_0) F \\ &= \sigma_m \frac{1}{{2}}(1 \pm \hat{\mathbf{k}}) (\pm I)(\pm k^m) e^{\pm I(\omega t \pm \mathbf{k} \cdot \mathbf{x})} \mathcal{F}+ \frac{1}{{2}}(1 \pm \hat{\mathbf{k}}) (\pm I \sqrt{\mu\epsilon}\omega/c) e^{\pm I(\omega t \pm \mathbf{k} \cdot \mathbf{x})} \mathcal{F} \\ &= \frac{\pm I}{2}\left(\pm \mathbf{k} + \sqrt{\mu\epsilon}\frac{\omega}{c}\right)(1 \pm \hat{\mathbf{k}}) e^{\pm I(\omega t \pm \mathbf{k} \cdot \mathbf{x})} \mathcal{F} \end{aligned}

By left multiplication with the conjugate of the Maxwell operator \boldsymbol{\nabla} - \sqrt{\mu\epsilon}\partial_0 we have the wave equation operator, and applying that we have, as before, a magnitude constraint on the wave number \mathbf{k}

\begin{aligned}0 &= (\boldsymbol{\nabla} - \sqrt{\mu\epsilon}\partial_0) (\boldsymbol{\nabla} + \sqrt{\mu\epsilon}\partial_0) F \\ &= (\boldsymbol{\nabla}^2 - {\mu\epsilon}\partial_{00}) F \\ &= \frac{-1}{2}(1 \pm \hat{\mathbf{k}}) \left( \mathbf{k}^2 - \mu\epsilon\frac{\omega^2}{c^2}\right) e^{\pm I(\omega t \pm \mathbf{k} \cdot \mathbf{x})} \mathcal{F} \end{aligned}

So we have as before {\left\lvert{\mathbf{k}}\right\rvert} = \sqrt{\mu\epsilon}\omega/c. Substituting into the first order operator result we have

\begin{aligned}0 &= (\boldsymbol{\nabla} + \sqrt{\mu\epsilon}\partial_0) F \\ &= \frac{\pm I}{2}\sqrt{\mu\epsilon}\frac{\omega}{c}\left(\pm \hat{\mathbf{k}} + 1\right)(1 \pm \hat{\mathbf{k}}) e^{\pm I(\omega t \pm \mathbf{k} \cdot \mathbf{x})} \mathcal{F} \end{aligned}

Observe that the multivector 1 \pm \hat{\mathbf{k}}, when squared is just a multiple of itself

\begin{aligned}(1 \pm \hat{\mathbf{k}})^2 = 1 + \hat{\mathbf{k}}^2 \pm 2 \hat{\mathbf{k}} = 2 (1 \pm \hat{\mathbf{k}}) \end{aligned}
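Equivalently, \frac{1}{2}(1 \pm \hat{\mathbf{k}}) are idempotent projectors. As a quick numerical aside (mine, not the original text’s), this identity can be checked in the Pauli-matrix representation with an arbitrary unit vector standing in for \hat{\mathbf{k}}:

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
Id = np.eye(2, dtype=complex)

n = [0.36, 0.48, 0.80]                   # components of a unit vector
khat = n[0] * s1 + n[1] * s2 + n[2] * s3

for sign in (+1, -1):
    P = Id + sign * khat
    assert np.allclose(P @ P, 2 * P)     # (1 +/- khat)^2 = 2 (1 +/- khat)
```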

So we have

\begin{aligned}0 &= (\boldsymbol{\nabla} + \sqrt{\mu\epsilon}\partial_0) F \\ &= {\pm I}\sqrt{\mu\epsilon}\frac{\omega}{c}(1 \pm \hat{\mathbf{k}}) e^{\pm I(\omega t \pm \mathbf{k} \cdot \mathbf{x})} \mathcal{F} \end{aligned}

So we see that the constraint on the individual assumed solutions is again that of absorption. Separately, the advanced or receding parts of the geometric phasor as expressed in (20) are solutions provided

\begin{aligned}\hat{\mathbf{k}} F = \mp F \end{aligned} \quad\quad\quad(21)

The geometric phasor is seen to be a curious superposition of both advancing and receding states. Independently we have something pretty much like the standard transverse phasor wave states. Is this superposition state physically meaningful? It is a solution to the Maxwell equation (without any constraints on \boldsymbol{\mathcal{E}} and \boldsymbol{\mathcal{B}}).

Posted in Math and Physics Learning. | Tagged: , , , , | Leave a Comment »

Comparing phasor and geometric transverse solutions to the Maxwell equation

Posted by peeterjoot on August 8, 2009

[Click here for a PDF of this post with nicer formatting]

Motivation

In ([1]) a phasor like form of the transverse wave equation was found by considering Fourier solutions of the Maxwell equation. This will be called the “geometric phasor” since it is hard to refer to and compare without giving it a name. Curiously, no perpendicularity condition for \mathbf{E} and \mathbf{B} seemed to be required for this geometric phasor. Why would that be the case? In Jackson’s treatment, which employed the traditional dot and cross product form of Maxwell’s equations, this followed from back substituting the assumed phasor solution into the equations. This back substitution wasn’t done in ([1]). If we attempt it we should find the same sort of additional mutual perpendicularity constraints on the fields.

Here we start with the equations from Jackson ([2], ch7), expressed in GA form. Using the same assumed phasor form we should get the same results using GA. Anything else indicates a misunderstanding or mistake, so as an intermediate step we should at least recover the Jackson result.

After using a more traditional phasor form (where one would have to take real parts) we revisit the geometric phasor found in ([1]). It will be found that the perpendicularity constraints of the Jackson phasor solution lead to a representation where the geometric phasor is reduced to the Jackson form with a straight substitution of the imaginary i with the pseudoscalar I = \sigma_1\sigma_2\sigma_3. This representation however, like the more general geometric phasor, requires no selection of real or imaginary parts to construct a “physical” solution.

With assumed phasor field

Maxwell’s equations in absence of charge and current ((7.1) of Jackson) can be summarized by

\begin{aligned}0 &= (\boldsymbol{\nabla} + \sqrt{\mu\epsilon}\partial_0) F  \end{aligned} \quad\quad\quad(1)

The F above is a composite electric and magnetic field merged into a single multivector. In the spatial basis the electric field component \mathbf{E} is a vector, and the magnetic component I\mathbf{B} is a bivector (in the Dirac basis both are bivectors).

\begin{aligned}F &= \mathbf{E} + I \mathbf{B}/\sqrt{\mu\epsilon} \end{aligned} \quad\quad\quad(2)

With an assumed phasor form

\begin{aligned}F = \mathcal{F} e^{ i(\mathbf{k} \cdot \mathbf{x} - \omega t) } = (\boldsymbol{\mathcal{E}} + I\boldsymbol{\mathcal{B}}/\sqrt{\mu\epsilon}) e^{ i(\mathbf{k} \cdot \mathbf{x} - \omega t) } \end{aligned} \quad\quad\quad(3)

Although there are many geometric multivectors that square to -1, we do not assume here that the imaginary i has any specific geometric meaning, and in fact commutes with all multivectors. Because of this we have to take the real parts later when done.

Operating on F with Maxwell’s equation we have

\begin{aligned}0 = (\boldsymbol{\nabla} + \sqrt{\mu\epsilon}\partial_0) F = i \left( \mathbf{k} - \sqrt{\mu\epsilon}\frac{\omega}{c} \right) F  \end{aligned} \quad\quad\quad(4)

Similarly, left multiplication of Maxwell’s equation by the conjugate operator \boldsymbol{\nabla} - \sqrt{\mu\epsilon}\partial_0, we have the wave equation

\begin{aligned}0 &= \left(\boldsymbol{\nabla}^2 - \frac{\mu\epsilon}{c^2}\frac{\partial^2}{\partial t^2}\right) F  \end{aligned} \quad\quad\quad(5)

and substitution of the assumed phasor solution gives us

\begin{aligned}0 = (\boldsymbol{\nabla}^2 - {\mu\epsilon}\partial_{00}) F = -\left( \mathbf{k}^2 - {\mu\epsilon}\frac{\omega^2}{c^2} \right) F  \end{aligned} \quad\quad\quad(6)

This provides the relation between the magnitude of \mathbf{k} and \omega, namely

\begin{aligned}{\left\lvert{\mathbf{k}}\right\rvert} = \pm \sqrt{\mu\epsilon}\frac{\omega}{c}  \end{aligned} \quad\quad\quad(7)

Without any real loss of generality we can pick the positive root, so the result of the Maxwell equation operator on the phasor is

\begin{aligned}0 = (\boldsymbol{\nabla} + \sqrt{\mu\epsilon}\partial_0) F = i \sqrt{\mu\epsilon}\frac{\omega}{c} \left( \hat{\mathbf{k}} - 1\right) F  \end{aligned} \quad\quad\quad(8)

Rearranging we have the curious property that the field F can “swallow” a left multiplication by the propagation direction unit vector

\begin{aligned}\hat{\mathbf{k}} F = F  \end{aligned} \quad\quad\quad(9)

Selection of the scalar and pseudoscalar grades of this equation shows that the electric and magnetic fields \mathbf{E} and \mathbf{B} are both completely transverse to the propagation direction \hat{\mathbf{k}}. For the scalar grades we have

\begin{aligned}0 &= \left\langle{{\hat{\mathbf{k}} F - F}}\right\rangle \\   &= \hat{\mathbf{k}} \cdot \mathbf{E} \end{aligned}

and for the pseudoscalar

\begin{aligned}0 &= {\left\langle{{\hat{\mathbf{k}} F - F}}\right\rangle}_{3} \\   &= I \hat{\mathbf{k}} \cdot \mathbf{B} \end{aligned}

From this we have \hat{\mathbf{k}} \cdot \mathbf{E} = \hat{\mathbf{k}} \cdot \mathbf{B} = 0. Because of this transverse property we see that the \hat{\mathbf{k}} multiplication of F in (9) serves to map electric field (vector) components into bivectors, and the magnetic bivector components into vectors. For the result to be the same means we must have an additional coupling between the field components. Writing out (9) in terms of the field components we have

\begin{aligned}\mathbf{E} + I\mathbf{B}/\sqrt{\mu\epsilon} &= \hat{\mathbf{k}} (\mathbf{E} + I\mathbf{B}/\sqrt{\mu\epsilon} ) \\ &= \hat{\mathbf{k}} \wedge \mathbf{E} + I (\hat{\mathbf{k}} \wedge \mathbf{B})/\sqrt{\mu\epsilon}  \\ &= I \hat{\mathbf{k}} \times \mathbf{E} + I^2 (\hat{\mathbf{k}} \times \mathbf{B})/\sqrt{\mu\epsilon}  \end{aligned}

Equating left and right hand grades we have

\begin{aligned}\mathbf{E} &= -(\hat{\mathbf{k}} \times \mathbf{B})/\sqrt{\mu\epsilon} \\ \mathbf{B} &= \sqrt{\mu\epsilon} (\hat{\mathbf{k}} \times \mathbf{E}) \end{aligned} \quad\quad\quad(10)

Since \mathbf{E} and \mathbf{B} both have the same phase relationships we also have

\begin{aligned}\boldsymbol{\mathcal{E}} &= -(\hat{\mathbf{k}} \times \boldsymbol{\mathcal{B}})/\sqrt{\mu\epsilon} \\ \boldsymbol{\mathcal{B}} &= \sqrt{\mu\epsilon} (\hat{\mathbf{k}} \times \boldsymbol{\mathcal{E}}) \end{aligned} \quad\quad\quad(12)
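As a numerical spot check (my own, with sample amplitudes and \hat{\mathbf{k}} = \sigma_3 in the Pauli-matrix representation): with \boldsymbol{\mathcal{B}} = \sqrt{\mu\epsilon}\,(\hat{\mathbf{k}} \times \boldsymbol{\mathcal{E}}) the field is indeed “absorbing”, \hat{\mathbf{k}} F = F.

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
Id = np.eye(2, dtype=complex)
I = 1j * Id                     # pseudoscalar s1 s2 s3

mu_eps = 2.25                   # sample value of mu * epsilon
khat = s3
E = 3.0 * s1                    # transverse E along e1, amplitude 3
B = np.sqrt(mu_eps) * 3.0 * s2  # B = sqrt(mu eps) (khat x E) = sqrt(mu eps) 3 e2

F = E + I @ B / np.sqrt(mu_eps)
assert np.allclose(khat @ F, F)  # khat is "swallowed": khat F = F
```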

With phasors as used in electrical engineering it is usual to allow the fields to have complex values. Assuming this is allowed here too, taking real parts of F, and separating by grade, we have for the electric and magnetic fields

\begin{aligned}\begin{pmatrix}\mathbf{E} \\ \mathbf{B}\end{pmatrix}=\text{Real}\begin{pmatrix}\boldsymbol{\mathcal{E}} \\ \boldsymbol{\mathcal{B}}\end{pmatrix}\cos(\mathbf{k} \cdot \mathbf{x} - \omega t)-\text{Imag}\begin{pmatrix}\boldsymbol{\mathcal{E}} \\ \boldsymbol{\mathcal{B}}\end{pmatrix}\sin(\mathbf{k} \cdot \mathbf{x} - \omega t) \end{aligned} \quad\quad\quad(14)

We will find a slightly different separation into electric and magnetic fields with the geometric phasor.

Geometrized phasor.

Translating from SI units to the CGS units of Jackson the geometric phasor representation of the field was found previously to be

\begin{aligned}F = e^{ -I \hat{\mathbf{k}} \omega t } e^{ I \mathbf{k} \cdot \mathbf{x} } (\boldsymbol{\mathcal{E}} + I\boldsymbol{\mathcal{B}}/\sqrt{\mu\epsilon}) \end{aligned} \quad\quad\quad(15)

As above the transverse requirement \boldsymbol{\mathcal{E}} \cdot \mathbf{k} = \boldsymbol{\mathcal{B}} \cdot \mathbf{k} = 0 was required. Application of Maxwell’s equation operator should show if we require any additional constraints. That is

\begin{aligned}0 &= (\boldsymbol{\nabla} + \sqrt{\mu\epsilon}\partial_0) F \\ &=(\boldsymbol{\nabla} + \sqrt{\mu\epsilon}\partial_0) e^{ -I \hat{\mathbf{k}} \omega t } e^{ I \mathbf{k} \cdot \mathbf{x} } (\boldsymbol{\mathcal{E}} + I\boldsymbol{\mathcal{B}}/\sqrt{\mu\epsilon}) \\ &=\sum \sigma_m e^{ -I \hat{\mathbf{k}} \omega t } (I k^m) e^{ I \mathbf{k} \cdot \mathbf{x} } (\boldsymbol{\mathcal{E}} + I\boldsymbol{\mathcal{B}}/\sqrt{\mu\epsilon}) -I \hat{\mathbf{k}} \sqrt{\mu\epsilon} \frac{\omega}{c} e^{ -I \hat{\mathbf{k}} \omega t } e^{ I \mathbf{k} \cdot \mathbf{x} } (\boldsymbol{\mathcal{E}} + I\boldsymbol{\mathcal{B}}/\sqrt{\mu\epsilon}) \\ &=I \left(\mathbf{k} - \hat{\mathbf{k}} \sqrt{\mu\epsilon} \frac{\omega}{c} \right) e^{ -I \hat{\mathbf{k}} \omega t } e^{ I \mathbf{k} \cdot \mathbf{x} } (\boldsymbol{\mathcal{E}} + I\boldsymbol{\mathcal{B}}/\sqrt{\mu\epsilon})  \end{aligned}

This is zero for any combinations of \boldsymbol{\mathcal{E}} or \boldsymbol{\mathcal{B}} since \mathbf{k} = \hat{\mathbf{k}} \sqrt{\mu\epsilon} \omega/c. It therefore appears that this geometric phasor has a fundamentally different nature than the non-geometric version. We have two exponentials that commute, but due to the difference in grades of the arguments, it doesn’t appear that there is any easy way to express this as a single-argument exponential. Multiplying these out, and using the trig product-to-sum identities, helps shed some light on the differences between the geometric phasor and the one using a generic imaginary. Starting off we have

\begin{aligned}e^{ -I \hat{\mathbf{k}} \omega t } e^{ I \mathbf{k} \cdot \mathbf{x} }&=(\cos(\omega t) -I\hat{\mathbf{k}} \sin(\omega t)) (\cos(\mathbf{k} \cdot \mathbf{x}) +I\sin(\mathbf{k} \cdot \mathbf{x})) \\ &=\cos(\omega t)\cos(\mathbf{k} \cdot \mathbf{x}) + \hat{\mathbf{k}} \sin(\omega t)\sin(\mathbf{k} \cdot \mathbf{x})-I\hat{\mathbf{k}} \sin(\omega t)\cos(\mathbf{k} \cdot \mathbf{x})+I \cos(\omega t) \sin(\mathbf{k} \cdot \mathbf{x}) \\  \end{aligned}

In this first expansion we see that this product of exponentials has scalar, vector, bivector, and pseudoscalar grades, despite the fact that we have only
vector and bivector terms in the end result. That will be seen to be due to the transverse nature of \mathcal{F} that we multiply with. Before performing that final multiplication, writing C_{-} = \cos(\omega t - \mathbf{k} \cdot \mathbf{x}), C_{+} = \cos(\omega t + \mathbf{k} \cdot \mathbf{x}), S_{-} = \sin(\omega t - \mathbf{k} \cdot \mathbf{x}), and S_{+} = \sin(\omega t + \mathbf{k} \cdot \mathbf{x}), we have

\begin{aligned}e^{ -I \hat{\mathbf{k}} \omega t } e^{ I \mathbf{k} \cdot \mathbf{x} }&=\frac{1}{{2}}\left( (C_{-} + C_{+})+\hat{\mathbf{k}} (C_{-} - C_{+})-I \hat{\mathbf{k}} (S_{-} + S_{+})-I (S_{-} - S_{+})\right) \end{aligned} \quad\quad\quad(16)

As an operator the left multiplication of \hat{\mathbf{k}} on a transverse vector has the action

\begin{aligned}\hat{\mathbf{k}} ( \cdot ) &= \hat{\mathbf{k}} \wedge (\cdot) \\ &= I (\hat{\mathbf{k}} \times (\cdot)) \\  \end{aligned}

This gives

\begin{aligned}e^{ -I \hat{\mathbf{k}} \omega t } e^{ I \mathbf{k} \cdot \mathbf{x} }&=\frac{1}{{2}}\left( (C_{-} + C_{+})+(C_{-} - C_{+}) I \hat{\mathbf{k}} \times+(S_{-} + S_{+}) \hat{\mathbf{k}} \times-I (S_{-} - S_{+})\right) \end{aligned} \quad\quad\quad(17)

Now, lets apply this to the field with \mathcal{F} = \boldsymbol{\mathcal{E}} + I\boldsymbol{\mathcal{B}}/\sqrt{\mu\epsilon}. To avoid dragging around the \sqrt{\mu\epsilon} factors, let’s also temporarily
work with units where \mu\epsilon = 1. We then have

\begin{aligned}2 e^{ -I \hat{\mathbf{k}} \omega t } e^{ I \mathbf{k} \cdot \mathbf{x} } \mathcal{F}&= (C_{-} + C_{+}) (\boldsymbol{\mathcal{E}} + I\boldsymbol{\mathcal{B}}) \\ &+ (C_{-} - C_{+}) (I (\hat{\mathbf{k}} \times \boldsymbol{\mathcal{E}}) - \hat{\mathbf{k}} \times \boldsymbol{\mathcal{B}}) \\ &+ (S_{-} + S_{+}) (\hat{\mathbf{k}} \times \boldsymbol{\mathcal{E}} +I (\hat{\mathbf{k}} \times \boldsymbol{\mathcal{B}}))  \\ &+ (S_{-} - S_{+}) (-I \boldsymbol{\mathcal{E}} + \boldsymbol{\mathcal{B}})  \end{aligned}

Rearranging explicitly in terms of the electric and magnetic field components this is

\begin{aligned}2 e^{ -I \hat{\mathbf{k}} \omega t } e^{ I \mathbf{k} \cdot \mathbf{x} } \mathcal{F}&= (C_{-} + C_{+}) \boldsymbol{\mathcal{E}} -(C_{-} - C_{+}) (\hat{\mathbf{k}} \times \boldsymbol{\mathcal{B}})+(S_{-} + S_{+}) (\hat{\mathbf{k}} \times \boldsymbol{\mathcal{E}})+(S_{-} - S_{+}) \boldsymbol{\mathcal{B}} \\ &+{I}\left(  (C_{-} + C_{+}) \boldsymbol{\mathcal{B}}+(C_{-} - C_{+}) (\hat{\mathbf{k}} \times \boldsymbol{\mathcal{E}})+(S_{-} + S_{+}) (\hat{\mathbf{k}} \times \boldsymbol{\mathcal{B}})-(S_{-} - S_{+}) \boldsymbol{\mathcal{E}} \right) \\  \end{aligned}

Quite a mess! A first observation is that with the application of the perpendicularity conditions (12) we get a remarkable reduction in complexity. That is

\begin{aligned}2 e^{ -I \hat{\mathbf{k}} \omega t } e^{ I \mathbf{k} \cdot \mathbf{x} } \mathcal{F}&= (C_{-} + C_{+}) \boldsymbol{\mathcal{E}} +(C_{-} - C_{+}) \boldsymbol{\mathcal{E}}+(S_{-} + S_{+}) \boldsymbol{\mathcal{B}}+(S_{-} - S_{+}) \boldsymbol{\mathcal{B}}\\ &+{I}\left(  (C_{-} + C_{+}) \boldsymbol{\mathcal{B}}+(C_{-} - C_{+}) \boldsymbol{\mathcal{B}}-(S_{-} + S_{+}) \boldsymbol{\mathcal{E}}-(S_{-} - S_{+}) \boldsymbol{\mathcal{E}} \right) \\  \end{aligned}

This wipes out the receding wave terms, leaving only the advanced wave terms:

\begin{aligned}e^{ -I \hat{\mathbf{k}} \omega t } e^{ I \mathbf{k} \cdot \mathbf{x} } \mathcal{F}&= C_{-} \boldsymbol{\mathcal{E}} +S_{-} (\hat{\mathbf{k}} \times \boldsymbol{\mathcal{E}})+{I}\left(  C_{-} \boldsymbol{\mathcal{B}} +S_{-} \hat{\mathbf{k}} \times \boldsymbol{\mathcal{B}} \right) \\ &= C_{-} (\boldsymbol{\mathcal{E}} + I\boldsymbol{\mathcal{B}})+S_{-} (\boldsymbol{\mathcal{B}} -I\boldsymbol{\mathcal{E}}) \\ &=( C_{-} -I S_{-} ) (\boldsymbol{\mathcal{E}} + I\boldsymbol{\mathcal{B}}) \\  \end{aligned}

We see therefore for this special case of mutually perpendicular (equal-magnitude) field components, our geometric phasor has only the advanced wave term

\begin{aligned}e^{ -I \hat{\mathbf{k}} \omega t } e^{ I \mathbf{k} \cdot \mathbf{x} } \mathcal{F} &= e^{-I(\omega t - \mathbf{k} \cdot \mathbf{x})} \mathcal{F} \end{aligned} \quad\quad\quad(18)

If we pick this as the starting point for the assumed solution, it is clear that the same perpendicularity constraints will follow as in Jackson’s treatment, or the GA version of it above. We have something that is slightly different though, for we have no requirement to take real parts of this simplified geometric phasor, since the result already contains just the vector and bivector terms of the electric and magnetic fields respectively.

A small aside, before continuing. Having made this observation that we can write the assumed phasor for this transverse field in the form of (18) an easier way to demonstrate that the product of exponentials reduces only to the advanced wave term is now clear. Instead of using (12) we could start back at (16) and employ the absorption property \hat{\mathbf{k}} \mathcal{F} = \mathcal{F}. That gives

\begin{aligned}e^{ -I \hat{\mathbf{k}} \omega t } e^{ I \mathbf{k} \cdot \mathbf{x} } \mathcal{F}&=\frac{1}{{2}}\left( (C_{-} + C_{+})+(C_{-} - C_{+})-I (S_{-} + S_{+})-I (S_{-} - S_{+})\right) \mathcal{F} \\ &=\left( C_{-} -I S_{-} \right) \mathcal{F} \end{aligned}
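The same absorption argument can be checked numerically (again my own aside). In the Pauli representation, taking \boldsymbol{\mathcal{E}} = \sigma_1 and \boldsymbol{\mathcal{B}} = \sigma_2 (so \mu\epsilon = 1 and \hat{\mathbf{k}}\mathcal{F} = \mathcal{F}), the geometric phasor collapses to the single advanced term e^{-I(\omega t - \mathbf{k} \cdot \mathbf{x})}\mathcal{F}:

```python
import numpy as np

def expm(A, terms=40):
    """Matrix exponential by Taylor series; fine for small 2x2 arguments."""
    out = np.eye(A.shape[0], dtype=complex)
    term = out.copy()
    for j in range(1, terms):
        term = term @ A / j
        out = out + term
    return out

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I = 1j * np.eye(2, dtype=complex)   # pseudoscalar

khat = s3
Fcal = s1 + I @ s2                  # E = e1, B = e2 with mu eps = 1
assert np.allclose(khat @ Fcal, Fcal)    # transverse, absorbing field

wt, kx = 1.1, 0.4                   # sample omega*t and k.x
geom = expm(-I @ khat * wt) @ expm(I * kx) @ Fcal
advanced = expm(-I * (wt - kx)) @ Fcal
assert np.allclose(geom, advanced)
```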

That’s the same result, obtained in a slicker manner. What is perhaps of more interest is examining the general split of our geometric phasor into advanced and receding wave terms, and examining the interdependence, if any, between the electric and magnetic field components. Since this didn’t lead exactly to where I expected, that’s now left as a project for a different day.

References

[1] Peeter Joot. {Space time algebra solutions of the Maxwell equation for discrete frequencies} [online]. http://sites.google.com/site/peeterjoot/math2009/maxwellVacuum.pdf.

[2] JD Jackson. Classical Electrodynamics. Wiley, 2nd edition, 1975.

Posted in Math and Physics Learning. | Tagged: , , , , | Leave a Comment »

(A POSSIBLY WRONG) Superposition of transverse electromagnetic field solutions.

Posted by peeterjoot on August 3, 2009

[Click here for a PDF of this post with nicer formatting]

Disclaimer

FIXME: I’M NOT CONFIDENT THAT I GOT THIS RIGHT. An attempt to verify the final result didn’t work. I’ve either messed up along the way (perhaps right at the beginning), or my verification itself was busted. Am posting these working notes for now, and will revisit later after thinking it through again (or trying the verification again).

Motivation

In ([1]), a Geometric Algebra solution for the transverse components of the electromagnetic field was found. Here we construct a superposition of these transverse fields, keeping the propagation direction fixed, and allowing for continuous variation of the wave number and angular velocity. Evaluation of this superposition integral, first utilizing a contour integral, then utilizing an inverse Fourier transform allows for the determination of the functional form of a general wave packet moving along a fixed direction in space. This wave packet will be seen to have two time dependencies, an advanced time term, and a retarded time term. The phasors will be eliminated from both the propagation and the transverse fields and will provide operators with which the transverse field can be calculated from a general propagation field wave packet.

Superposition of phasor solutions.

When the field is required to have explicit sinusoidal time and propagation direction, the field components in the transverse (to the propagation) direction were found to be

\begin{aligned}F_t &= \frac{1}{{i \left( \pm k \hat{\mathbf{z}} - \sqrt{\mu\epsilon}\frac{\omega}{c}\right) }} \boldsymbol{\nabla}_t F_z \\ &= \frac{i}{k^2 - \mu\epsilon\frac{\omega^2}{c^2}} \left( \pm k \hat{\mathbf{z}} + \sqrt{\mu\epsilon}\frac{\omega}{c}\right) \boldsymbol{\nabla}_t F_z \end{aligned} \quad\quad\quad(1)

Here F_z = F_z(x,y) = \mathbf{E}_z(x,y) + I\mathbf{B}_z(x,y)/\sqrt{\mu\epsilon} is required by construction to commute with \hat{\mathbf{z}}, but is otherwise arbitrary, except perhaps for boundary value constraints not considered here.

Removing the restriction to fixed wave number and angular velocity we can integrate over both to express a more general transverse wave propagation

\begin{aligned}F_t &= \int d\omega e^{-i\omega t} \int dk e^{ikz} \frac{i}{k^2 - \mu\epsilon\frac{\omega^2}{c^2}} \left(k \hat{\mathbf{z}} + \sqrt{\mu\epsilon}\frac{\omega}{c}\right) \boldsymbol{\nabla}_t {F_z}^{k,\omega} \end{aligned} \quad\quad\quad(3)

Inventing temporary notation for convenience, we write {F_z}^{k,\omega} = F_z(x,y,k,\omega) for the frequency dependent weighting function. Also note that the explicit \pm k has been dropped after allowing the wave number to range over both positive and negative values.

Observe that each of these integrals has the form of a Fourier integral (ignoring constant factors that can be incorporated into the weighting function). Also observe that the integral kernel has two poles on the real k-axis (or real \omega-axis). These can be utilized to evaluate one of the integrals using an upper half plane semicircular contour. FIXME: picture here.

Assuming that {F_z}^{k,\omega} is small enough at \infty (on the large contour) to be neglected, we have for the integral after integrating around the poles at k = \pm \sqrt{\mu\epsilon}{\left\lvert{\omega/c}\right\rvert}

\begin{aligned} F_t &= \pi i \int d\omega e^{-i\omega t} {\left.e^{ikz} \frac{i}{k - \sqrt{\mu\epsilon}\frac{{\left\lvert{\omega}\right\rvert}}{c}} \left(k \hat{\mathbf{z}} + \sqrt{\mu\epsilon}\frac{\omega}{c}\right) \boldsymbol{\nabla}_t {F_z}^{k,\omega}\right\vert}_{k=-\sqrt{\mu\epsilon}\frac{{\left\lvert{\omega}\right\rvert}}{c}} \\ &+ \pi i \int d\omega e^{-i\omega t} {\left.e^{ikz} \frac{i}{k + \sqrt{\mu\epsilon}\frac{{\left\lvert{\omega}\right\rvert}}{c}} \left(k \hat{\mathbf{z}} + \sqrt{\mu\epsilon}\frac{\omega}{c}\right) \boldsymbol{\nabla}_t {F_z}^{k,\omega}\right\vert}_{k=\sqrt{\mu\epsilon}\frac{{\left\lvert{\omega}\right\rvert}}{c}} \end{aligned}

Writing {F_z}^{\omega+} = F_z(x,y,\omega, k=\sqrt{\mu\epsilon}{\left\lvert{\omega}\right\rvert}/c), and {F_z}^{\omega-} = F_z(x,y,\omega, k=-\sqrt{\mu\epsilon}{\left\lvert{\omega}\right\rvert}/c), for the values of our field at the poles, we have

\begin{aligned} F_t &= \frac{\pi}{2}\int d\omega e^{-i\omega t} e^{-i \sqrt{\mu\epsilon}\frac{{\left\lvert{\omega}\right\rvert}}{c} z}\frac{\omega -\hat{\mathbf{z}}{\left\lvert{\omega}\right\rvert}}{{\left\lvert{\omega}\right\rvert}}\boldsymbol{\nabla}_t {F_z}^{\omega-} \\ &- \frac{\pi}{2}\int d\omega e^{-i\omega t} e^{i \sqrt{\mu\epsilon}\frac{{\left\lvert{\omega}\right\rvert}}{c} z} \frac{\omega +\hat{\mathbf{z}}{\left\lvert{\omega}\right\rvert}}{{\left\lvert{\omega}\right\rvert}}\boldsymbol{\nabla}_t {F_z}^{\omega+} \end{aligned}

Can one ignore the singularity at \omega = 0 since it divides out? If so, then we have

\begin{aligned} F_t &= \frac{\pi}{2}\int d\omega e^{-i\omega t} e^{-i \sqrt{\mu\epsilon}\frac{\left\lvert{\omega}\right\rvert}{c} z}(\text{sgn}(\omega) - \hat{\mathbf{z}})\boldsymbol{\nabla}_t {F_z}^{\omega-} \\ &- \frac{\pi}{2}\int d\omega e^{-i\omega t} e^{i \sqrt{\mu\epsilon}\frac{\left\lvert{\omega}\right\rvert}{c} z} (\text{sgn}(\omega) + \hat{\mathbf{z}})\boldsymbol{\nabla}_t {F_z}^{\omega+} \end{aligned}

Writing this out separately for the regions greater and lesser than zero we have

\begin{aligned} F_t &= \frac{\pi}{2}\int_{0+}^\infty d\omega e^{-i\omega t} e^{-i \sqrt{\mu\epsilon}\frac{\omega}{c} z}(1 - \hat{\mathbf{z}})\boldsymbol{\nabla}_t {F_z}^{\omega-} - \frac{\pi}{2}\int_{0+}^\infty d\omega e^{-i\omega t} e^{i \sqrt{\mu\epsilon}\frac{\omega}{c} z} (1 + \hat{\mathbf{z}})\boldsymbol{\nabla}_t {F_z}^{\omega+} \\ &- \frac{\pi}{2}\int_{-\infty}^{0-} d\omega e^{-i\omega t} e^{i \sqrt{\mu\epsilon}\frac{\omega}{c} z}(1 + \hat{\mathbf{z}})\boldsymbol{\nabla}_t {F_z}^{\omega-} - \frac{\pi}{2}\int_{-\infty}^{0-} d\omega e^{-i\omega t} e^{-i \sqrt{\mu\epsilon}\frac{\omega}{c} z} (-1 + \hat{\mathbf{z}})\boldsymbol{\nabla}_t {F_z}^{\omega+} \\  \end{aligned}

Grouping by exponentials of like sign, and integrating over a region that omits some neighborhood around the origin, we have eliminated the {\left\lvert{\omega}\right\rvert} factors

\begin{aligned}F_t &= (1 - \hat{\mathbf{z}}) \boldsymbol{\nabla}_t\int d\omega e^{-i\omega \left(t + \sqrt{\mu\epsilon}\frac{z}{c}\right)}\frac{\pi}{2}({F_z}^{\omega-}\theta(\omega) + {F_z}^{\omega+}\theta(-\omega)) \\ &-(1 + \hat{\mathbf{z}}) \boldsymbol{\nabla}_t\int d\omega e^{-i\omega \left(t - \sqrt{\mu\epsilon}\frac{z}{c}\right)}\frac{\pi}{2}({F_z}^{\omega-}\theta(\omega) + {F_z}^{\omega+}\theta(-\omega)) \end{aligned}

The Heaviside unit step function has been used to group things nicely over the doubled integration ranges, and what is left now can be observed to be Fourier transforms from the frequency to time domain. The transformed functions are to be evaluated at a shifted time in each case.

\begin{aligned}F_t &= (1 - \hat{\mathbf{z}}) \boldsymbol{\nabla}_t\frac{\pi}{2}{\left.\mathcal{F}\left( ({F_z}^{\omega-}\theta(\omega) + {F_z}^{\omega+}\theta(-\omega)) \right)\right\vert}_{t=t + \sqrt{\mu\epsilon}\frac{z}{c}} \\ &-(1 + \hat{\mathbf{z}}) \boldsymbol{\nabla}_t\frac{\pi}{2}{\left.\mathcal{F}\left( ({F_z}^{\omega-}\theta(\omega) + {F_z}^{\omega+}\theta(-\omega)) \right)\right\vert}_{t=t - \sqrt{\mu\epsilon}\frac{z}{c}} \end{aligned}

Actually evaluating this Fourier transform shouldn’t be necessary, since the original angular velocity and wave number domain function F_z(x,y,\omega,k) was arbitrary (except for its commutation with \hat{\mathbf{z}}). Instead we just suppose (once again overloading the symbol F_z) that we have a function F_z(x,y,u) that at time u takes the value

\begin{aligned}F_z(x,y,u) = \frac{\pi}{2}\mathcal{F}\left( ({F_z}^{\omega-}\theta(\omega) + {F_z}^{\omega+}\theta(-\omega)) \right) \end{aligned}

The transform will not change the grades of the propagation field F_z so this still commutes with \hat{\mathbf{z}}. We can now write

\begin{aligned}F_t &= (1 - \hat{\mathbf{z}}) \boldsymbol{\nabla}_t F_z(x,y, t + \sqrt{\mu\epsilon}{z}/{c})-(1 + \hat{\mathbf{z}}) \boldsymbol{\nabla}_t F_z(x,y, t - \sqrt{\mu\epsilon}{z}/{c}) \end{aligned} \quad\quad\quad(4)

So, after a bunch of manipulation, we find exactly how the transverse component of the field is related to the propagation direction field, and have eliminated the phasor description supplied by our single frequency/wave-number field relations.

What we have left is a propagation field that has the form of an arbitrary unidirectional wave packet, traveling in either direction through the medium with speed \pm c/\sqrt{\mu\epsilon}. If all this math is right, the transverse field for an advancing wave is generated by application onto the propagation field of the operator

\begin{aligned}-(1 + \hat{\mathbf{z}}) \boldsymbol{\nabla}_t \end{aligned} \quad\quad\quad(5)

Similarly, the transverse field for a receding wave is given by application of the operator

\begin{aligned}(1 - \hat{\mathbf{z}}) \boldsymbol{\nabla}_t \end{aligned} \quad\quad\quad(6)
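Dropping the common \boldsymbol{\nabla}_t factor, the scalar-plus-vector factors of these two operators are (up to sign and a factor of two) the projectors \frac{1}{2}(1 \mp \hat{\mathbf{z}}) met earlier, and they annihilate one another, so the advanced and receding transverse fields don’t mix. A small Pauli-matrix check of that annihilation (my own sketch, with \hat{\mathbf{z}} = \sigma_3):

```python
import numpy as np

s3 = np.array([[1, 0], [0, -1]], dtype=complex)  # zhat
Id = np.eye(2, dtype=complex)

adv = -(Id + s3)   # factor of the advancing-wave operator (nabla_t omitted)
rec = Id - s3      # factor of the receding-wave operator

# (1 - zhat)(1 + zhat) = 1 - zhat^2 = 0, in either order
assert np.allclose(adv @ rec, 0)
assert np.allclose(rec @ adv, 0)
```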

Verification.

Given the doubt about the \omega =0 point in the integral, and the possibility for sign error and other algebraic mistakes it seems worthwhile now to go back to Maxwell’s equation and verify the correctness of these results. Then, presuming everything worked out okay, it would also be good to relate things back to the electric and magnetic field components of the field.

Maxwell’s equation in these units is

\begin{aligned}\left(\boldsymbol{\nabla}_t \pm i k \hat{\mathbf{z}} - \sqrt{\mu\epsilon}\frac{i\omega}{c}\right) F(x,y) = 0 \end{aligned} \quad\quad\quad(7)

and we want to compute the transverse direction projection from the propagation direction term

\begin{aligned}F_t &= \mathbf{E}_t + I \mathbf{B}_t = \frac{1}{{2}} (F - \hat{\mathbf{z}} F \hat{\mathbf{z}}) \\ F_z &= \mathbf{E}_z + I \mathbf{B}_z = \frac{1}{{2}} (F + \hat{\mathbf{z}} F \hat{\mathbf{z}})  \end{aligned} \quad\quad\quad(8)

Picking one of the terms of (4), we have

\begin{aligned}F = F_z + F_t = \left(1 \pm (1 \mp \hat{\mathbf{z}}) \boldsymbol{\nabla}_t \right) F_z(t \pm \sqrt{\mu\epsilon} z/c)  \end{aligned}

Substituting into the left hand operator of Maxwell’s equation (7) and reducing, I get

\begin{aligned}((\hat{\mathbf{z}} \pm 1) {\nabla_t}^2 + \nabla_t) F_z + \frac{\mu\epsilon}{c}(1 \pm \hat{\mathbf{z}}) F_z' \end{aligned}

I don’t see how this would necessarily equal zero.

References

[1] Peeter Joot. Transverse electric and magnetic fields [online]. http://sites.google.com/site/peeterjoot/math2009/transverseField.pdf.

Posted in Math and Physics Learning. | 2 Comments »

Summarizing: Transverse electric and magnetic fields

Posted by peeterjoot on August 1, 2009

[Click here for a PDF of this post with nicer formatting]

There are potentially a lot of new ideas in the previous transverse field post (some new even for me, despite previous exposure to the Geometric Algebra formalism). There was no real attempt to teach GA there, but for completeness the GA form of Maxwell’s equation was developed from the traditional divergence and curl formulation of Maxwell’s equations. That development was required mainly because of the use of CGS units, which give Maxwell’s equation a different form from the usual one (see [1]).

This time a less exploratory summary of those previous results is assembled.

In these CGS units our field F, and Maxwell’s equation (in absence of charge and current), take the form

\begin{aligned}F &= \mathbf{E} + \frac{I\mathbf{B}}{\sqrt{\mu\epsilon}} \\ 0 &= \left(\boldsymbol{\nabla} + \frac{\sqrt{\mu\epsilon}}{c}\partial_t\right) F  \end{aligned} \quad\quad\quad(30)

The electric and magnetic fields can be picked off by selecting the grade one (vector) components

\begin{aligned}\mathbf{E} &= {\left\langle{{F}}\right\rangle}_{1} \\ \mathbf{B} &= {\left\langle{{-I F}}\right\rangle}_{1} \end{aligned} \quad\quad\quad(32)
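These grade selections are easy to test concretely in the Pauli-matrix representation of the 3D geometric algebra, where e_k maps to \sigma_k and the pseudoscalar I maps to i times the identity. The representation is standard, but the sample field components below are arbitrary numbers, and \mu = \epsilon = 1 is assumed so that F = \mathbf{E} + I \mathbf{B}:

```python
import numpy as np

# Pauli representation of Cl(3,0): e_k -> sigma_k, pseudoscalar I -> 1j * identity
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
sig = [s1, s2, s3]

E = np.array([1.0, -2.0, 0.5])   # arbitrary sample field components
B = np.array([0.3, 0.7, -1.1])

# F = E + I B (taking mu = epsilon = 1 for this check)
F = sum(E[k] * sig[k] for k in range(3)) + 1j * sum(B[k] * sig[k] for k in range(3))

# the sigma_k coefficient of a multivector M is tr(sigma_k M)/2; its real part
# is the grade-1 (vector) component, so these implement the grade selections
E_out = np.array([np.trace(sig[k] @ F).real / 2 for k in range(3)])
B_out = np.array([np.trace(sig[k] @ (-1j * F)).real / 2 for k in range(3)])
```

Here E_out recovers the grade-1 selection of F, and B_out the grade-1 selection of -IF, matching the input components.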

With an explicit sinusoidal time dependence and z-axis propagation for the field

\begin{aligned}F(x,y,z,t) &= F(x,y) e^{\pm i k z - i \omega t}  \end{aligned} \quad\quad\quad(34)

and a split of the gradient into transverse and z-axis components \boldsymbol{\nabla} = \boldsymbol{\nabla}_t + \hat{\mathbf{z}} \partial_z, Maxwell’s equation takes the form

\begin{aligned}\left(\boldsymbol{\nabla}_t \pm i k \hat{\mathbf{z}} - \sqrt{\mu\epsilon}\frac{i\omega}{c}\right) F(x,y) = 0 \end{aligned} \quad\quad\quad(35)
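This reduction can be verified mechanically. Here is a sympy sketch in the Pauli-matrix representation (e_k mapped to \sigma_k), using a generic 2x2 matrix of functions f_{ij}(x,y) as a stand-in for the multivector F(x,y), and taking the upper sign of the exponent:

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t')
k, w, c, mu, eps = sp.symbols('k omega c mu epsilon', positive=True)

s1 = sp.Matrix([[0, 1], [1, 0]])
s2 = sp.Matrix([[0, -sp.I], [sp.I, 0]])
s3 = sp.Matrix([[1, 0], [0, -1]])

# generic multivector field F(x, y) in the Pauli representation
F = sp.Matrix(2, 2, lambda i, j: sp.Function(f'f{i}{j}')(x, y))
phase = sp.exp(sp.I * (k * z - w * t))   # upper sign of e^{+- i k z - i omega t}

# (grad + sqrt(mu eps)/c d_t) acting on F(x, y) phase, as in (30)
full = F * phase
lhs = (s1 * sp.diff(full, x) + s2 * sp.diff(full, y)
       + s3 * sp.diff(full, z) + sp.sqrt(mu * eps) / c * sp.diff(full, t))

# the reduced operator of (35) acting on F(x, y) alone
reduced = (s1 * sp.diff(F, x) + s2 * sp.diff(F, y)
           + sp.I * k * s3 * F - sp.I * sp.sqrt(mu * eps) * (w / c) * F)

residual = sp.simplify(lhs - reduced * phase)
```

The residual is the zero matrix, i.e. the phasor factors out exactly as claimed.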

Writing for short F = F(x,y), we can split the field into transverse and z-axis components with the commutator and anticommutator products respectively. For the z-axis components we have

\begin{aligned}F_z \hat{\mathbf{z}} \equiv E_z + I B_z = \frac{1}{{2}} (F \hat{\mathbf{z}} + \hat{\mathbf{z}} F)  \end{aligned} \quad\quad\quad(36)

The projections onto the z-axis and transverse directions are respectively

\begin{aligned}F_z &= \mathbf{E}_z + I \mathbf{B}_z = \frac{1}{{2}} (F + \hat{\mathbf{z}} F \hat{\mathbf{z}}) \\ F_t &= \mathbf{E}_t + I \mathbf{B}_t = \frac{1}{{2}} (F - \hat{\mathbf{z}} F \hat{\mathbf{z}} ) \end{aligned} \quad\quad\quad(37)
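These projections can be checked in the same Pauli-matrix picture (\hat{\mathbf{z}} mapped to \sigma_3); the multivector components below are arbitrary random numbers, so this is a numerical sanity check rather than part of the derivation:

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

rng = np.random.default_rng(0)
# F = (E + I B) . sigma with random components, pseudoscalar I -> 1j
coeffs = rng.standard_normal(3) + 1j * rng.standard_normal(3)
F = coeffs[0] * s1 + coeffs[1] * s2 + coeffs[2] * s3

Fz = (F + s3 @ F @ s3) / 2   # z-axis projection
Ft = (F - s3 @ F @ s3) / 2   # transverse projection
```

The split is complete (Fz + Ft = F), and since sigma_3 sigma_k sigma_3 = -sigma_k for k != 3, Fz keeps exactly the sigma_3 component while Ft keeps the sigma_1 and sigma_2 components.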

With an application of the transverse gradient to the z-axis field we easily found the relation between the two
field components

\begin{aligned}\boldsymbol{\nabla}_t F_z &= -i \left( \pm k \hat{\mathbf{z}} - \sqrt{\mu\epsilon}\frac{\omega}{c}\right) F_t \end{aligned} \quad\quad\quad(39)

A left division by the multivector factor -i \left( \pm k \hat{\mathbf{z}} - \sqrt{\mu\epsilon}\frac{\omega}{c}\right) gives the total transverse field

\begin{aligned}F_t &= -\frac{1}{{i \left( \pm k \hat{\mathbf{z}} - \sqrt{\mu\epsilon}\frac{\omega}{c}\right) }} \boldsymbol{\nabla}_t F_z  \end{aligned} \quad\quad\quad(40)

Multiplication of both the numerator and denominator by the conjugate normalizes this

\begin{aligned}F_t &= \frac{i}{k^2 - \mu\epsilon\frac{\omega^2}{c^2}} \left( \pm k \hat{\mathbf{z}} + \sqrt{\mu\epsilon}\frac{\omega}{c}\right) \boldsymbol{\nabla}_t F_z \end{aligned} \quad\quad\quad(41)
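The conjugate trick works because \hat{\mathbf{z}}^2 = 1 makes the product (\pm k \hat{\mathbf{z}} - \sqrt{\mu\epsilon}\omega/c)(\pm k \hat{\mathbf{z}} + \sqrt{\mu\epsilon}\omega/c) a pure scalar. A small numpy check in the Pauli representation (\hat{\mathbf{z}} mapped to \sigma_3; the values of k and s = \sqrt{\mu\epsilon}\omega/c are arbitrary):

```python
import numpy as np

s3 = np.array([[1, 0], [0, -1]], dtype=complex)  # zhat -> sigma_3
Id = np.eye(2)
k, s = 2.0, 3.0   # arbitrary k and s = sqrt(mu eps) omega / c

checks = []
for sign in (+1, -1):
    A = sign * k * s3 - s * Id        # the factor +-k zhat - s
    A_conj = sign * k * s3 + s * Id   # its conjugate +-k zhat + s
    # the product is the scalar k^2 - s^2, so A_conj / (k^2 - s^2) inverts A
    checks.append(np.allclose(A @ A_conj, (k**2 - s**2) * Id))
    checks.append(np.allclose(A @ A_conj / (k**2 - s**2), Id))
```

The cross terms cancel because the scalar part commutes with \sigma_3, leaving only the scalar k^2 - s^2 on the diagonal.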

From this the transverse electric and magnetic fields may be picked off using the projective grade selection operations of (32), and are

\begin{aligned}\mathbf{E}_t &= \frac{i}{\mu\epsilon\frac{\omega^2}{c^2} -k^2} \left( \pm k \boldsymbol{\nabla}_t E_z - \frac{\omega}{c} \hat{\mathbf{z}} \times \boldsymbol{\nabla}_t B_z \right) \\ \mathbf{B}_t &= \frac{i}{\mu\epsilon\frac{\omega^2}{c^2} -k^2} \left( {\mu\epsilon}\frac{\omega}{c} \hat{\mathbf{z}} \times \boldsymbol{\nabla}_t E_z \pm k \boldsymbol{\nabla}_t B_z \right)  \end{aligned} \quad\quad\quad(42)
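A partial consistency check of these: for a TM mode (B_z = 0) with a separable profile for E_z, the transverse electric field from the first of these relations, together with the z-axis field, satisfies Gauss’s law once the dispersion relation \mu\epsilon\omega^2/c^2 = k^2 + k_x^2 + k_y^2 is imposed. A sympy sketch (the profile and mode are illustrative choices, not from the post itself):

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t')
kx, ky, k, w, c, mu, eps = sp.symbols('k_x k_y k omega c mu epsilon', positive=True)

# TM mode: B_z = 0, with an illustrative transverse profile for E_z
Ez = sp.cos(kx * x) * sp.cos(ky * y)
gamma2 = mu * eps * w**2 / c**2 - k**2   # denominator of the E_t, B_t relations

# transverse electric field from the E_t relation, upper sign, B_z = 0
Et = (sp.I * k / gamma2) * sp.Matrix([sp.diff(Ez, x), sp.diff(Ez, y)])

phase = sp.exp(sp.I * (k * z - w * t))
div_E = (sp.diff(Et[0] * phase, x) + sp.diff(Et[1] * phase, y)
         + sp.diff(Ez * phase, z))

# impose the dispersion relation mu eps w^2/c^2 = k^2 + kx^2 + ky^2
dispersion = {w: c * sp.sqrt(k**2 + kx**2 + ky**2) / sp.sqrt(mu * eps)}
residual = sp.simplify(div_E.subs(dispersion))
```

The residual vanishes, so the divergence of the reconstructed electric field is zero; in particular it is the relative sign between the \pm k \boldsymbol{\nabla}_t E_z term and E_z that makes the cancellation work.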

References

[1] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.

Posted in Math and Physics Learning. | Leave a Comment »