Peeter Joot's (OLD) Blog.

Math, physics, perl, and programming obscurity.


Geometry of general Jones vector (problem 2.8)

Posted by peeterjoot on August 9, 2012

[Click here for a PDF of this post with nicer formatting]

Another problem from [1].


The general case is represented by the Jones vector

\begin{aligned}\begin{bmatrix}A \\ B e^{i\Delta}\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(1.1.1)

Show that this represents elliptically polarized light in which the major axis of the ellipse makes an angle

\begin{aligned}\frac{1}{{2}} \tan^{-1} \left( \frac{2 A B \cos \Delta }{A^2 - B^2} \right),\end{aligned} \hspace{\stretch{1}}(1.1.2)

with the x axis.


Prior to attempting the problem as stated, let’s explore the algebra of a parametric representation of an ellipse, rotated at an angle \theta as in figure (1). The equation of the ellipse in the rotated coordinates is

Figure 1: Rotated ellipse


\begin{aligned}\begin{bmatrix}x' \\ y'\end{bmatrix}=\begin{bmatrix}a \cos u \\ b \sin u\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(1.2.3)

which is easily seen to have the required form

\begin{aligned}\left( \frac{x'}{a} \right)^2+\left( \frac{y'}{b} \right)^2 = 1.\end{aligned} \hspace{\stretch{1}}(1.2.4)

We’d like to express x' and y' in the “fixed” frame. Consider figure (2) where our coordinate conventions are illustrated. With

Figure 2: 2d rotation of frame


\begin{aligned}\begin{bmatrix}\hat{\mathbf{x}}' \\ \hat{\mathbf{y}}'\end{bmatrix}=\begin{bmatrix}\hat{\mathbf{x}} e^{\hat{\mathbf{x}} \hat{\mathbf{y}} \theta} \\ \hat{\mathbf{y}} e^{\hat{\mathbf{x}} \hat{\mathbf{y}} \theta}\end{bmatrix}=\begin{bmatrix}\hat{\mathbf{x}} \cos \theta + \hat{\mathbf{y}} \sin\theta \\ \hat{\mathbf{y}} \cos \theta - \hat{\mathbf{x}} \sin\theta\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(1.2.5)

and x \hat{\mathbf{x}} + y\hat{\mathbf{y}} = x' \hat{\mathbf{x}}' + y' \hat{\mathbf{y}}' we find

\begin{aligned}\begin{bmatrix}x' \\ y'\end{bmatrix}=\begin{bmatrix}\cos \theta & \sin\theta \\ -\sin\theta & \cos\theta\end{bmatrix}\begin{bmatrix}x \\ y\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(1.2.6)

so that the equation of the ellipse can be stated as

\begin{aligned}\begin{bmatrix}\cos \theta & \sin\theta \\ -\sin\theta & \cos\theta\end{bmatrix}\begin{bmatrix}x \\ y\end{bmatrix}=\begin{bmatrix}a \cos u \\ b \sin u\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(1.2.7)


\begin{aligned}\begin{bmatrix}x \\ y\end{bmatrix}=\begin{bmatrix}\cos \theta & -\sin\theta \\ \sin\theta & \cos\theta\end{bmatrix}\begin{bmatrix}a \cos u \\ b \sin u\end{bmatrix}=\begin{bmatrix}a \cos \theta \cos u - b \sin \theta \sin u \\ a \sin \theta \cos u + b \cos \theta \sin u\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(1.2.8)

Observing that

\begin{aligned}\cos u + \alpha \sin u = \text{Real}\left( (1 + i \alpha) e^{-i u} \right)\end{aligned} \hspace{\stretch{1}}(1.2.9)

we have, with \text{atan2}(x, y) denoting the angle (principal argument) of the complex number x + i y, a Jones vector representation of our rotated ellipse

\begin{aligned}\begin{bmatrix}x \\ y\end{bmatrix}=\text{Real}\begin{bmatrix}( a \cos \theta - i b \sin\theta ) e^{-iu} \\ ( a \sin \theta + i b \cos\theta ) e^{-iu}\end{bmatrix}=\text{Real}\begin{bmatrix}\sqrt{ a^2 \cos^2 \theta + b^2 \sin^2 \theta } e^{i \text{atan2}(a \cos\theta, -b\sin\theta) - i u} \\ \sqrt{ a^2 \sin^2 \theta + b^2 \cos^2 \theta } e^{i \text{atan2}(a \sin\theta, b\cos\theta) - i u}\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(1.2.10)

Since we can absorb a constant phase factor into our -iu argument, we can write this as

\begin{aligned}\begin{bmatrix}x \\ y\end{bmatrix}=\text{Real}\left(\begin{bmatrix}\sqrt{ a^2 \cos^2 \theta + b^2 \sin^2 \theta } \\ \sqrt{ a^2 \sin^2 \theta + b^2 \cos^2 \theta } e^{i \text{atan2}(a \sin\theta, b\cos\theta) -i \text{atan2}(a \cos\theta, -b\sin\theta)} \end{bmatrix} e^{-i u'}\right).\end{aligned} \hspace{\stretch{1}}(1.2.11)

This has the required form once we make the identifications

\begin{aligned}A = \sqrt{ a^2 \cos^2 \theta + b^2 \sin^2 \theta }\end{aligned} \hspace{\stretch{1}}(1.2.12)

\begin{aligned}B = \sqrt{ a^2 \sin^2 \theta + b^2 \cos^2 \theta } \end{aligned} \hspace{\stretch{1}}(1.2.13)

\begin{aligned}\Delta =\text{atan2}(a \sin\theta, b\cos\theta) - \text{atan2}(a \cos\theta, -b\sin\theta).\end{aligned} \hspace{\stretch{1}}(1.2.14)
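These identifications can be spot checked numerically. Here is a Python (numpy) sketch with arbitrary values of a, b, and \theta, comparing the direct parametric form of the rotated ellipse against the phase-shifted Jones form; note that numpy's arctan2 takes its arguments in (y, x) order, the reverse of the \text{atan2}(x, y) convention used above.

```python
import numpy as np

# arbitrary semi-axes and rotation angle (assumed values for illustration)
a, b, theta = 3.0, 1.0, 0.4

u = np.linspace(0.0, 2.0 * np.pi, 1000)
# direct parametric form of the rotated ellipse
x = a * np.cos(theta) * np.cos(u) - b * np.sin(theta) * np.sin(u)
y = a * np.sin(theta) * np.cos(u) + b * np.cos(theta) * np.sin(u)

# Jones-vector form: amplitudes and phases of the two complex components
A = np.hypot(a * np.cos(theta), b * np.sin(theta))
B = np.hypot(a * np.sin(theta), b * np.cos(theta))
# numpy's arctan2(y, x) is the reverse of the post's atan2(x, y) convention
phi_x = np.arctan2(-b * np.sin(theta), a * np.cos(theta))
phi_y = np.arctan2(b * np.cos(theta), a * np.sin(theta))

x2 = np.real(A * np.exp(1j * (phi_x - u)))
y2 = np.real(B * np.exp(1j * (phi_y - u)))

assert np.allclose(x, x2) and np.allclose(y, y2)
```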

What isn’t obvious is that we can do this for any A, B, and \Delta. Portions of this problem I tried in Mathematica, starting from the elliptic equation derived in section 8.1.3 of [2]. I’d used Mathematica because on paper I found the rotation angle that eliminated the cross terms to always be 45 degrees, but that turned out to be because I’d first used a change of variables that scaled the equation. Here’s the whole procedure without any such scaling, arriving at the desired result for this problem. Our starting point is the Jones specified field, where as above I’m using -iu = i (k z - \omega t)

\begin{aligned}\mathbf{E} = \text{Real}\left( \begin{bmatrix}A \\ B e^{i \Delta}\end{bmatrix}e^{-i u}\right)=\begin{bmatrix}A \cos u \\ B \cos ( \Delta - u )\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(1.2.15)

We need our cosine angle addition formula

\begin{aligned}\cos( a + b ) = \text{Real} \left( (\cos a + i \sin a)(\cos b + i \sin b)\right) =\cos a \cos b - \sin a \sin b.\end{aligned} \hspace{\stretch{1}}(1.2.16)

Using this and writing \mathbf{E} = (x, y) we have

\begin{aligned}x = A \cos u\end{aligned} \hspace{\stretch{1}}(1.2.17)

\begin{aligned}y = B ( \cos \Delta \cos u + \sin \Delta \sin u ).\end{aligned} \hspace{\stretch{1}}(1.2.18)

Subtracting x \cos \Delta/A from y/B we have

\begin{aligned}\frac{y}{B} - \frac{x}{A} \cos \Delta = \sin \Delta \sin u.\end{aligned} \hspace{\stretch{1}}(1.2.19)

Squaring this and using \sin^2 u = 1 - \cos^2 u, and 1.2.17 we have

\begin{aligned}\left( \frac{y}{B} - \frac{x}{A} \cos \Delta \right)^2 = \sin^2 \Delta \left( 1 - \frac{x^2}{A^2} \right),\end{aligned} \hspace{\stretch{1}}(1.2.20)

which expands and simplifies to

\begin{aligned}\left( \frac{x}{A} \right)^2 +\left( \frac{y}{B} \right)^2 - 2 \left( \frac{x}{A} \right)\left( \frac{y}{B} \right)\cos \Delta = \sin^2 \Delta,\end{aligned} \hspace{\stretch{1}}(1.2.21)

which is the equation of a rotated ellipse, as desired. Let’s figure out the angle of rotation required to kill the cross terms. Writing a = 1/A, b = 1/B, and rotating our primed coordinate frame by an angle \theta

\begin{aligned}\begin{bmatrix}x \\ y\end{bmatrix}=\begin{bmatrix}\cos \theta & -\sin\theta \\ \sin\theta & \cos\theta\end{bmatrix}\begin{bmatrix}x' \\ y'\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(1.2.22)

we have

\begin{aligned}\begin{aligned}\sin^2 \Delta &=a^2 (x' \cos \theta - y'\sin\theta)^2+b^2 ( x' \sin\theta + y' \cos\theta)^2 \\ &- 2 a b (x' \cos \theta - y'\sin\theta)( x'\sin\theta + y'\cos\theta)\cos \Delta \\ &=(x')^2 ( a^2 \cos^2 \theta + b^2 \sin^2 \theta - 2 a b \cos \theta \sin \theta \cos \Delta ) \\ &+(y')^2 ( a^2 \sin^2 \theta + b^2 \cos^2 \theta + 2 a b \cos \theta \sin \theta \cos \Delta ) \\ &+ 2 x' y' ( (b^2 -a^2) \cos \theta \sin\theta + a b (\sin^2 \theta - \cos^2 \theta) \cos \Delta ).\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.2.23)

To kill off the cross term we require

\begin{aligned}\begin{aligned}0 &= (b^2 -a^2) \cos \theta \sin\theta + a b (\sin^2 \theta - \cos^2 \theta) \cos \Delta \\ &= \frac{1}{{2}} (b^2 -a^2) \sin (2 \theta) - a b \cos (2 \theta) \cos \Delta,\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.2.24)


\begin{aligned}\tan (2 \theta) = \frac{2 a b \cos \Delta}{b^2 - a^2} = \frac{2 A B \cos \Delta}{A^2 - B^2}.\end{aligned} \hspace{\stretch{1}}(1.2.25)

This yields 1.1.2 as desired. We also end up with expressions for the major and minor axis lengths, which, for \sin \Delta \ne 0, are respectively

\begin{aligned}\sin\Delta/ \sqrt{ b^2 + (a^2 - b^2) \cos^2 \theta - a b \sin (2 \theta) \cos \Delta }\end{aligned} \hspace{\stretch{1}}(1.2.26)

\begin{aligned}\sin\Delta/\sqrt{ b^2 + (a^2 - b^2)\sin^2 \theta + a b \sin (2 \theta) \cos \Delta },\end{aligned} \hspace{\stretch{1}}(1.2.27)

which completes the task of determining the geometry of the elliptic parameterization we see results from the general Jones vector description.
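As a final numerical sanity check (a numpy sketch with arbitrary values of A, B, and \Delta): points generated from the Jones parameterization should satisfy the rotated-ellipse quadratic form, and the rotation angle found above should eliminate the x' y' cross term.

```python
import numpy as np

A, B, Delta = 1.5, 0.8, 1.1          # arbitrary Jones parameters
u = np.linspace(0.0, 2.0 * np.pi, 500)

# points from the Jones parameterization
x = A * np.cos(u)
y = B * (np.cos(Delta) * np.cos(u) + np.sin(Delta) * np.sin(u))

# they satisfy the rotated-ellipse quadratic form
lhs = (x/A)**2 + (y/B)**2 - 2 * (x/A) * (y/B) * np.cos(Delta)
assert np.allclose(lhs, np.sin(Delta)**2)

# the rotation angle derived above kills the cross term
a, b = 1.0/A, 1.0/B
theta = 0.5 * np.arctan2(2 * a * b * np.cos(Delta), b**2 - a**2)
cross = (b**2 - a**2) * np.cos(theta) * np.sin(theta) \
        + a * b * (np.sin(theta)**2 - np.cos(theta)**2) * np.cos(Delta)
assert abs(cross) < 1e-12
```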


[1] G.R. Fowles. Introduction to modern optics. Dover Pubns, 1989.

[2] E. Hecht. Optics. 1998.


Complex form of Poynting relationship

Posted by peeterjoot on August 2, 2012


This is a problem from [1], something that I’d tried back when reading [2] but in a way that involved Geometric Algebra and the covariant representation of the energy momentum tensor. Let’s try this with plain old complex vector algebra instead.

Question: Average Poynting flux for complex 2D fields (problem 2.4)

Given a complex field phasor representation of the form

\begin{aligned}\tilde{\mathbf{E}} = \mathbf{E}_0 e^{i (\mathbf{k} \cdot \mathbf{x} - \omega t)}\end{aligned} \hspace{\stretch{1}}(1.0.1)

\begin{aligned}\tilde{\mathbf{H}} = \mathbf{H}_0 e^{i (\mathbf{k} \cdot \mathbf{x} - \omega t)}.\end{aligned} \hspace{\stretch{1}}(1.0.2)

Here we allow the components of \mathbf{E}_0 and \mathbf{H}_0 to be complex. As usual our fields are defined as the real parts of the phasors

\begin{aligned}\mathbf{E} = \text{Real}( \tilde{\mathbf{E}} )\end{aligned} \hspace{\stretch{1}}(1.0.3)

\begin{aligned}\mathbf{H} = \text{Real}( \tilde{\mathbf{H}} ).\end{aligned} \hspace{\stretch{1}}(1.0.4)

Show that the average Poynting vector has the value

\begin{aligned}\left\langle{{ \mathbf{S} }}\right\rangle = \left\langle{{ \mathbf{E} \times \mathbf{H} }}\right\rangle = \frac{1}{{2}} \text{Real}( \mathbf{E}_0 \times \mathbf{H}_0^{*} ).\end{aligned} \hspace{\stretch{1}}(1.0.5)


While the text works with two dimensional quantities in the x,y plane, I found this problem easier when tackled in three dimensions. Suppose we write the complex phasor components as

\begin{aligned}\mathbf{E}_0 = \sum_k (\mathbf{E}_{kr} + i \mathbf{E}_{ki}) \mathbf{e}_k = \sum_k {\left\lvert{\mathbf{E}_k}\right\rvert} e^{i \phi_k} \mathbf{e}_k\end{aligned} \hspace{\stretch{1}}(1.0.6)

\begin{aligned}\mathbf{H}_0 = \sum_k (\mathbf{H}_{kr} + i \mathbf{H}_{ki}) \mathbf{e}_k = \sum_k {\left\lvert{\mathbf{H}_k}\right\rvert} e^{i \psi_k} \mathbf{e}_k,\end{aligned} \hspace{\stretch{1}}(1.0.7)

and also write \phi_k' = \phi_k + \mathbf{k} \cdot \mathbf{x}, and \psi_k' = \psi_k + \mathbf{k} \cdot \mathbf{x}, then our (real) fields are

\begin{aligned}\mathbf{E} = \sum_k {\left\lvert{\mathbf{E}_k}\right\rvert} \cos(\phi_k' - \omega t) \mathbf{e}_k\end{aligned} \hspace{\stretch{1}}(1.0.8)

\begin{aligned}\mathbf{H} = \sum_k {\left\lvert{\mathbf{H}_k}\right\rvert} \cos(\psi_k' - \omega t) \mathbf{e}_k,\end{aligned} \hspace{\stretch{1}}(1.0.9)

and our Poynting vector before averaging (in these units) is

\begin{aligned}\mathbf{E} \times \mathbf{H} = \sum_{klm} {\left\lvert{\mathbf{E}_k}\right\rvert} {\left\lvert{\mathbf{H}_l}\right\rvert} \cos(\phi_k' - \omega t) \cos(\psi_l' - \omega t) \epsilon_{klm} \mathbf{e}_m.\end{aligned} \hspace{\stretch{1}}(1.0.10)

We are tasked with computing the average of cosines

\begin{aligned}\left\langle{{ \cos(a - \omega t) \cos(b - \omega t) }}\right\rangle&=\frac{1}{{T}} \int_0^T \cos(a - \omega t) \cos(b - \omega t) dt \\ &=\frac{1}{{\omega T}} \int_0^T \cos(a - \omega t) \cos(b - \omega t) \omega dt \\ &=\frac{1}{{2 \pi}} \int_0^{2 \pi}\cos(a - u) \cos(b - u) du \\ &=\frac{1}{{4 \pi}} \int_0^{2 \pi}\left( \cos(a + b - 2 u) + \cos(a - b) \right) du \\ &=\frac{1}{{2}} \cos(a - b).\end{aligned} \hspace{\stretch{1}}(1.0.11)
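As a quick numerical check of this time average (a numpy sketch with arbitrary phases a and b):

```python
import numpy as np

a, b, w = 0.7, 2.1, 3.0                 # arbitrary phases and angular frequency
T = 2.0 * np.pi / w
t = np.linspace(0.0, T, 4096, endpoint=False)  # uniform samples over one period

# mean over one full period of the product of cosines
avg = np.mean(np.cos(a - w * t) * np.cos(b - w * t))
assert abs(avg - 0.5 * np.cos(a - b)) < 1e-12
```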

So, our average Poynting vector is

\begin{aligned}\left\langle{{\mathbf{E} \times \mathbf{H}}}\right\rangle = \frac{1}{{2}} \sum_{klm} {\left\lvert{\mathbf{E}_k}\right\rvert} {\left\lvert{\mathbf{H}_l}\right\rvert} \cos(\phi_k - \psi_l) \epsilon_{klm} \mathbf{e}_m.\end{aligned} \hspace{\stretch{1}}(1.0.12)

We have only to compare this to the desired expression

\begin{aligned}\frac{1}{{2}} \text{Real}( \mathbf{E}_0 \times \mathbf{H}_0^{*} )= \frac{1}{{2}} \sum_{klm} \text{Real}\left({\left\lvert{\mathbf{E}_k}\right\rvert} e^{i\phi_k}{\left\lvert{\mathbf{H}_l}\right\rvert} e^{-i\psi_l}\right)\epsilon_{klm} \mathbf{e}_m = \frac{1}{{2}} \sum_{klm} {\left\lvert{\mathbf{E}_k}\right\rvert} {\left\lvert{\mathbf{H}_l}\right\rvert} \cos( \phi_k - \psi_l )\epsilon_{klm} \mathbf{e}_m.\end{aligned} \hspace{\stretch{1}}(1.0.13)

This proves the desired result.
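The result is also easy to confirm numerically. Here is a numpy sketch with randomly chosen complex amplitudes, sampling the fields over one period at \mathbf{x} = 0 (no loss of generality, since the \mathbf{k} \cdot \mathbf{x} phase could be absorbed into the component phases):

```python
import numpy as np

rng = np.random.default_rng(0)
# arbitrary complex field amplitudes
E0 = rng.standard_normal(3) + 1j * rng.standard_normal(3)
H0 = rng.standard_normal(3) + 1j * rng.standard_normal(3)

w = 2.0
t = np.linspace(0.0, 2.0 * np.pi / w, 4096, endpoint=False)  # one period

# real fields at x = 0: E = Real(E0 e^{-i w t}), likewise for H
E = np.real(E0[:, None] * np.exp(-1j * w * t))
H = np.real(H0[:, None] * np.exp(-1j * w * t))

# time-averaged Poynting vector (in these units S = E x H)
S_avg = np.cross(E, H, axis=0).mean(axis=1)
assert np.allclose(S_avg, 0.5 * np.real(np.cross(E0, np.conj(H0))))
```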


[1] G.R. Fowles. Introduction to modern optics. Dover Pubns, 1989.

[2] JD Jackson. Classical Electrodynamics. John Wiley and Sons, 2nd edition, 1975.


PHY450HS1. Relativistic electrodynamics. Problem Set 5.

Posted by peeterjoot on April 7, 2011


Problem 1. Sinusoidal current density on an infinite flat conducting sheet.


An infinitely thin flat conducting surface lying in the x-z plane carries a surface current density:

\begin{aligned}\boldsymbol{\kappa} = \mathbf{e}_3 \theta(t) \kappa_0 \sin\omega t\end{aligned} \hspace{\stretch{1}}(1.1)

Here \mathbf{e}_3 is a unit vector in the z direction, \kappa_0 is the peak value of the current density, and \theta(t) is the Heaviside step function: \theta(t < 0) = 0, \theta(t > 0) = 1.

1. Write down the equations determining the electromagnetic potentials. Specify which gauge you choose to work in.
2. Find the electromagnetic potentials outside the plane.
3. Find the electric and magnetic fields outside the plane.
4. Give a physical interpretation of the results of the previous section. Do they agree with your qualitative expectations?
5. Find the direction and magnitude of the energy flux outside the plane.
6. Consider a point at some distance from the plane. Sketch the intensity of the electromagnetic field near this point as a function of time. Explain physically.
7. Consider now a point near the plane. Are the electric and magnetic fields you found continuous across the conducting plane? Explain.

1-2. Determining the electromagnetic potentials.

Augmenting the surface current density with a delta function we can form the current density for the system

\begin{aligned}\mathbf{J} = \delta(y) \boldsymbol{\kappa} = \mathbf{e}_3 \theta(t) \delta(y) \kappa_0 \sin\omega t.\end{aligned} \hspace{\stretch{1}}(1.2)

With only a current distribution specified, use of the Coulomb gauge allows us to set the scalar potential to zero, so that we have

\begin{aligned}\square \mathbf{A} &= \frac{4 \pi \mathbf{J}}{c} \\ \mathbf{E} &= - \frac{1}{{c}} \frac{\partial {\mathbf{A}}}{\partial {t}} \\ \mathbf{B} &= \boldsymbol{\nabla} \times \mathbf{A}\end{aligned} \hspace{\stretch{1}}(1.3)

Utilizing our Green’s function, which satisfies \square G(\mathbf{x}, t) = \delta^3(\mathbf{x}) \delta(t),

\begin{aligned}G(\mathbf{x}, t) = \frac{\delta(t - {\left\lvert{\mathbf{x}}\right\rvert}/c)}{4 \pi {\left\lvert{\mathbf{x}}\right\rvert}},\end{aligned} \hspace{\stretch{1}}(1.6)

we can invert our vector potential equation, solving for \mathbf{A}

\begin{aligned}\mathbf{A}(\mathbf{x}, t) &= \int d^3 \mathbf{x}' dt' \square_{\mathbf{x}', t'} G(\mathbf{x} - \mathbf{x}', t - t') \mathbf{A}(\mathbf{x}', t') \\ &= \int d^3 \mathbf{x}' dt' G(\mathbf{x} - \mathbf{x}', t - t') \frac{4 \pi \mathbf{J}(\mathbf{x}', t')}{c} \\ &= \int d^3 \mathbf{x}' dt' \frac{\delta(t - t' - {\left\lvert{\mathbf{x} -\mathbf{x}'}\right\rvert}/c)}{4 \pi {\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}}\frac{4 \pi \mathbf{J}(\mathbf{x}', t')}{c} \\ &= \int d^3 \mathbf{x}' \frac{\mathbf{J}(\mathbf{x}', t - {\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}/c)}{c {\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}} \\ &= \frac{1}{{c}} \int dx' dy' dz'\mathbf{e}_3 \theta(t - {\left\lvert{\mathbf{x} -\mathbf{x}'}\right\rvert}/c) \delta(y') \kappa_0 \sin(\omega(t - {\left\lvert{\mathbf{x} -\mathbf{x}'}\right\rvert}/c))\frac{1}{{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}}} \\ &= \frac{\mathbf{e}_3 \kappa_0}{c} \int dx' dz'\theta(t - {\left\lvert{\mathbf{x} -(x', 0, z')}\right\rvert}/c) \sin(\omega(t - {\left\lvert{\mathbf{x} -(x', 0, z')}\right\rvert}/c))\frac{1}{{{\left\lvert{\mathbf{x} - (x', 0, z')}\right\rvert}}} \\ &= \frac{\mathbf{e}_3 \kappa_0}{c} \int dx' dz'\theta\left(t - \frac{1}{{c}} \sqrt{(x-x')^2 + y^2 + (z-z')^2}\right) \frac{\sin\left(\omega\left(t - \frac{1}{{c}} \sqrt{(x-x')^2 + y^2 + (z-z')^2}\right)\right)}{\sqrt{(x-x')^2 + y^2 + (z-z')^2}}\end{aligned}

Now a switch to polar coordinates makes sense. Let’s use

\begin{aligned}x' - x &= r \cos\alpha \\ z' - z &= r \sin\alpha \end{aligned} \hspace{\stretch{1}}(1.7)

This gives us

\begin{aligned}\mathbf{A}(\mathbf{x}, t) &= \frac{\mathbf{e}_3 \kappa_0}{c} \int_{r=0}^\infty \int_{\alpha=0}^{2\pi} r dr d\alpha\theta\left(t - \frac{1}{{c}} \sqrt{r^2 + y^2}\right) \frac{\sin\left(\omega\left(t - \frac{1}{{c}} \sqrt{r^2 + y^2 }\right)\right)}{\sqrt{r^2 + y^2}} \\ &= \frac{2 \pi \mathbf{e}_3 \kappa_0}{c} \int_{r=0}^\infty r dr \theta\left(t - \frac{1}{{c}} \sqrt{r^2 + y^2}\right) \frac{\sin\left(\omega\left(t - \frac{1}{{c}} \sqrt{r^2 + y^2 }\right)\right)}{\sqrt{r^2 + y^2}} \\ \end{aligned}

Since the theta function imposes a

\begin{aligned}t - \frac{1}{{c}} \sqrt{r^2 + y^2 } > 0\end{aligned} \hspace{\stretch{1}}(1.9)

constraint, equivalent to

\begin{aligned}c^2 t^2 > r^2 + y^2,\end{aligned} \hspace{\stretch{1}}(1.10)

we can reduce the upper range of the integral and drop the theta function explicitly

\begin{aligned}\mathbf{A}(\mathbf{x}, t) = \frac{2 \pi \mathbf{e}_3 \kappa_0}{c} \int_{r=0}^{\sqrt{c^2 t^2 - y^2}} r dr \frac{\sin\left(\omega\left(t - \frac{1}{{c}} \sqrt{r^2 + y^2 }\right)\right)}{\sqrt{r^2 + y^2}} \end{aligned} \hspace{\stretch{1}}(1.11)
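This radial integral can be checked numerically before evaluating it in closed form. With the substitution s = \sqrt{r^2 + y^2}, s \, ds = r \, dr, it reduces to \int_{{\left\lvert{y}\right\rvert}}^{ct} \sin(\omega(t - s/c)) ds = (c/\omega)\left(1 - \cos(\omega(t - {\left\lvert{y}\right\rvert}/c))\right). A numpy spot check of the r form against this closed form, with arbitrary values for \omega, t, y and c = 1:

```python
import numpy as np

c, w, t, y = 1.0, 2.0, 5.0, 1.5      # arbitrary values with c*t > |y|

r = np.linspace(0.0, np.sqrt((c * t)**2 - y**2), 200_001)
s = np.sqrt(r**2 + y**2)
integrand = r * np.sin(w * (t - s / c)) / s

# trapezoidal rule for the radial integral
numeric = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))

closed = (c / w) * (1.0 - np.cos(w * (t - abs(y) / c)))
assert abs(numeric - closed) < 1e-7
```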

Here I got lazy and used Mathematica to help evaluate this integral, for an end result of

\begin{aligned}\mathbf{A}(\mathbf{x}, t) = \frac{2 \pi \kappa_0 }{\omega} \mathbf{e}_3 (1 - \cos(\omega(t - {\left\lvert{y}\right\rvert}/c))).\end{aligned} \hspace{\stretch{1}}(1.12)

Note that the theta function constraint survives: this result holds for c t > {\left\lvert{y}\right\rvert}, while the potential vanishes identically at earlier times.

3. Find the electric and magnetic fields outside the plane.

Our electric field can be calculated by inspection

\begin{aligned}\mathbf{E} = -\frac{1}{{c}}\frac{\partial {\mathbf{A}}}{\partial {t}}= -\frac{2 \pi \kappa_0 }{c} \mathbf{e}_3 \sin(\omega(t - {\left\lvert{y}\right\rvert}/c)).\end{aligned} \hspace{\stretch{1}}(1.13)

For the magnetic field we have, for y > 0 where \boldsymbol{\nabla} {\left\lvert{y}\right\rvert} = \mathbf{e}_2,

\begin{aligned}\mathbf{B} &= \boldsymbol{\nabla} \times \mathbf{A} \\ &= -\frac{2 \pi \kappa_0 }{\omega} \mathbf{e}_3 \times \boldsymbol{\nabla} (1 -\cos(\omega(t - {\left\lvert{y}\right\rvert}/c))) \\ &= \frac{2 \pi \kappa_0 }{\omega} (-\sin(\omega(t - {\left\lvert{y}\right\rvert}/c)))\mathbf{e}_3 \times \boldsymbol{\nabla} \omega(t - {\left\lvert{y}\right\rvert}/c) \\ &= \frac{2 \pi \kappa_0 }{c} \sin(\omega(t - {\left\lvert{y}\right\rvert}/c))\mathbf{e}_3 \times \boldsymbol{\nabla} {\left\lvert{y}\right\rvert} \\ &= \frac{2 \pi \kappa_0 }{c} \sin(\omega(t - {\left\lvert{y}\right\rvert}/c)) \mathbf{e}_3 \times \mathbf{e}_2,\end{aligned}

which gives us

\begin{aligned}\mathbf{B} = -\frac{2 \pi \kappa_0 }{c}\mathbf{e}_1 \sin(\omega(t - {\left\lvert{y}\right\rvert}/c)).\end{aligned} \hspace{\stretch{1}}(1.14)

4. Give a physical interpretation of the results of the previous section.

It was expected that the absence of any boundary on the conducting sheet would make the potential away from the plane depend only on the perpendicular distance {\left\lvert{y}\right\rvert}, and this is precisely what we find after performing the grunt work of the integration.

Given that we had a sinusoidal forcing function for our wave equation, it seems logical that the non-homogeneous solution to the wave equation also has sinusoidal dependence. The sinusoidal current results in sinusoidal potentials and fields, very much like the electric circuit problems that we solve with phasors in engineering applications.

We find that the electric and magnetic fields are oriented parallel to the plane containing the surface current density, with the electric field in the direction of the current, the magnetic field perpendicular to that, and the energy propagating outwards from the plane.

Despite introducing a step function in time for the current, the fields at first glance appear to exist at all points in space and time, even for t < 0, as if we had a non-causal system. The resolution lies in the integration limit noted above: the result only holds for c t > {\left\lvert{y}\right\rvert}, and the potential is zero at earlier times, so the switch-on transient is simply a front propagating outwards from the sheet at speed c.

5. Find the direction and magnitude of the energy flux outside the plane.

Our energy flux, the Poynting vector, is

\begin{aligned}\mathbf{S} = \frac{c}{4\pi}\left( \frac{2 \pi \kappa_0 }{c} \right)^2 \sin^2(\omega(t - {\left\lvert{y}\right\rvert}/c)) \mathbf{e}_3 \times \mathbf{e}_1.\end{aligned} \hspace{\stretch{1}}(1.15)

This is

\begin{aligned}\mathbf{S} = \frac{ \pi \kappa_0^2 }{c} \sin^2(\omega(t - {\left\lvert{y}\right\rvert}/c)) \mathbf{e}_2= \frac{ \pi \kappa_0^2 }{2 c} (1 - \cos( 2 \omega(t - {\left\lvert{y}\right\rvert}/c) ) ) \mathbf{e}_2.\end{aligned} \hspace{\stretch{1}}(1.16)

This energy flux is directed outwards along the y axis, with magnitude oscillating around an average value of

\begin{aligned}{\left\lvert{\left\langle{{S}}\right\rangle}\right\rvert} = \frac{ \pi \kappa_0^2 }{2 c}.\end{aligned} \hspace{\stretch{1}}(1.17)

6. Sketch the intensity of the electromagnetic field far from the plane.

I’m assuming here that this question does not refer to the flux intensity \left\langle{\mathbf{S}}\right\rangle, since that is constant, and boring to sketch.

The time varying portion of either the electric or magnetic field is proportional to

\begin{aligned}\sin( \omega t - \omega {\left\lvert{y}\right\rvert}/c )\end{aligned} \hspace{\stretch{1}}(1.18)

We have a sinusoid as a function of time, of period T = 2 \pi/\omega where the phase is adjusted at each position by the factor \omega {\left\lvert{y}\right\rvert}/c. Every increase of \Delta y = 2 \pi c/\omega shifts the waveform back.
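These statements are easy to verify with a small numpy sketch (arbitrary \omega and y > 0, with c = 1):

```python
import numpy as np

w, c = 3.0, 1.0                       # arbitrary angular frequency, c = 1
t = np.linspace(0.0, 4.0 * np.pi / w, 1000)

def field(t, y):
    # time-varying part of either field at distance y from the sheet
    return np.sin(w * t - w * abs(y) / c)

y = 2.0
# stepping out by Delta y = 2 pi c / w reproduces the same waveform
assert np.allclose(field(t, y), field(t, y + 2.0 * np.pi * c / w))
# more generally, stepping out by dy delays the waveform by dy / c
dy = 0.7
assert np.allclose(field(t + dy / c, y + dy), field(t, y))
```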

A sketch is attached.

7. Continuity across the plane?

It is sufficient to consider either the electric or magnetic field for the continuity question since the continuity is dictated by the sinusoidal term for both fields.

The point in time only changes the phase, so let’s consider the electric field at t=0, and an infinitesimal distance y = \pm \epsilon c/\omega. At either point we have

\begin{aligned}\mathbf{E}(0, \pm \epsilon c/\omega, 0, 0) = \frac{ 2 \pi \kappa_0 }{c} \mathbf{e}_3 \sin\epsilon \approx \frac{ 2 \pi \kappa_0 }{c} \mathbf{e}_3 \epsilon\end{aligned} \hspace{\stretch{1}}(1.19)

In the limit as \epsilon \rightarrow 0 the field strength matches on either side of the plane (and happens to equal zero for this t= 0 case).

We have a discontinuity in the spatial derivative of either field near the plate, but not for the fields themselves. A plot illustrates this nicely

Figure: \sin(t - {\left\lvert{y}\right\rvert})

Problem 2. Fields generated by an arbitrarily moving charge.


Show that for a particle moving on a worldline parametrized by (ct, \mathbf{x}_c(t)), the retarded time t_r with respect to an arbitrary space time point (ct, \mathbf{x}), defined in class as:

\begin{aligned}{\left\lvert{\mathbf{x} - \mathbf{x}_c(t_r)}\right\rvert} = c(t - t_r)\end{aligned} \hspace{\stretch{1}}(2.20)


satisfies

\begin{aligned}\boldsymbol{\nabla} t_r = -\frac{\mathbf{x} - \mathbf{x}_c(t_r)}{c {\left\lvert{\mathbf{x} - \mathbf{x}_c(t_r)}\right\rvert} - \mathbf{v}_c(t_r) \cdot (\mathbf{x} - \mathbf{x}_c(t_r))}\end{aligned} \hspace{\stretch{1}}(2.21)


and

\begin{aligned}\frac{\partial {t_r}}{\partial {t}} = \frac{c {\left\lvert{\mathbf{x} - \mathbf{x}_c(t_r)}\right\rvert}}{c {\left\lvert{\mathbf{x} - \mathbf{x}_c(t_r)}\right\rvert} - \mathbf{v}_c(t_r) \cdot (\mathbf{x} - \mathbf{x}_c(t_r))}\end{aligned} \hspace{\stretch{1}}(2.22)

1. Then, use these to derive the expressions for \mathbf{E} and \mathbf{B} given in the book (and in the class notes).
2. Finally, re-derive the already familiar expressions for the EM fields of a particle moving with uniform velocity.

0. Solution. Gradient and time derivatives of the retarded time function.

Let’s use notation something like our text [1], where the solution to this problem is outlined in section 63, and write

\begin{aligned}\mathbf{R}(t_r) &= \mathbf{x} - \mathbf{x}_c(t_r) \\ R &= {\left\lvert{\mathbf{R}}\right\rvert}\end{aligned} \hspace{\stretch{1}}(2.23)


\begin{aligned}\frac{\partial {\mathbf{R}}}{\partial {t_r}} = - \mathbf{v}_c.\end{aligned} \hspace{\stretch{1}}(2.25)

From R^2 = \mathbf{R} \cdot \mathbf{R} we also have

\begin{aligned}2 R \frac{\partial {R}}{\partial {t_r}} = 2 \mathbf{R} \cdot \frac{\partial {\mathbf{R}}}{\partial {t_r}},\end{aligned} \hspace{\stretch{1}}(2.26)

so if we write

\begin{aligned}\hat{\mathbf{R}} = \frac{\mathbf{R}}{R},\end{aligned} \hspace{\stretch{1}}(2.27)

we have

\begin{aligned}R'(t_r) = -\hat{\mathbf{R}} \cdot \mathbf{v}_c.\end{aligned} \hspace{\stretch{1}}(2.28)

Proceeding in the manner of the text, we have

\begin{aligned}\frac{\partial {R}}{\partial {t}} = \frac{\partial {R}}{\partial {t_r}} \frac{\partial {t_r}}{\partial {t}} = -\hat{\mathbf{R}} \cdot \mathbf{v}_c \frac{\partial {t_r}}{\partial {t}}.\end{aligned} \hspace{\stretch{1}}(2.29)

From 2.20 we also have

\begin{aligned}R = {\left\lvert{\mathbf{x} - \mathbf{x}_c(t_r)}\right\rvert} = c(t - t_r),\end{aligned} \hspace{\stretch{1}}(2.30)


\begin{aligned}\frac{\partial {R}}{\partial {t}} = c\left(1 - \frac{\partial {t_r}}{\partial {t}}\right).\end{aligned} \hspace{\stretch{1}}(2.31)

This, together with 2.29, gives us

\begin{aligned}\boxed{\frac{\partial {t_r}}{\partial {t}} = \frac{1}{{ 1 -\hat{\mathbf{R}} \cdot \frac{\mathbf{v}_c}{c} }}}\end{aligned} \hspace{\stretch{1}}(2.32)

For the gradient we operate on the implicit equation 2.30 again. This gives us

\begin{aligned}\boldsymbol{\nabla} R = \boldsymbol{\nabla} (c t - c t_r) = - c \boldsymbol{\nabla} t_r.\end{aligned} \hspace{\stretch{1}}(2.33)

However, we can also use the spatial definition of R = {\left\lvert{\mathbf{x} - \mathbf{x}_c(t_r)}\right\rvert}. Note that this distance R = R(t_r) is a function of space and time, since t_r = t_r(\mathbf{x}, t) is implicitly a function of the spatial and time positions at which the retarded time is to be measured.

\begin{aligned}\boldsymbol{\nabla} R &=\boldsymbol{\nabla} \sqrt{(\mathbf{x} - \mathbf{x}_c(t_r))^2} \\ &=\frac{1}{{2R}} \boldsymbol{\nabla} (\mathbf{x} - \mathbf{x}_c(t_r))^2 \\ &=\frac{1}{{R}} (x^\beta - x_c^\beta) \mathbf{e}_\alpha \partial_\alpha (x^\beta - x_c^\beta(t_r)) \\ &=\frac{1}{{R}} (\mathbf{R})_\beta \mathbf{e}_\alpha ({\delta_\alpha}^\beta - \partial_\alpha x_c^\beta(t_r)) \\ \end{aligned}

We have only this bit \partial_\alpha x_c^\beta(t_r) to expand, but that’s just going to require a chain rule expansion. This is easier to see in a more generic form

\begin{aligned}\frac{\partial {f(g)}}{\partial {x^\alpha}} = \frac{\partial {f}}{\partial {g}} \frac{\partial {g}}{\partial {x^\alpha}},\end{aligned} \hspace{\stretch{1}}(2.34)

so we have

\begin{aligned}\frac{\partial {x_c^\beta(t_r)}}{\partial {x^\alpha}} = \frac{\partial {x_c^\beta(t_r)}}{\partial {t_r}} \frac{\partial {t_r}}{\partial {x^\alpha}},\end{aligned} \hspace{\stretch{1}}(2.35)

which gets us close to where we want to be

\begin{aligned}\boldsymbol{\nabla} R&=\frac{1}{{R}} \left(\mathbf{R} - (\mathbf{R})_\beta \frac{\partial {x_c^\beta(t_r)}}{\partial {t_r}} \mathbf{e}_\alpha \frac{\partial {t_r}}{\partial {x^\alpha}} \right) \\ &=\frac{1}{{R}} \left(\mathbf{R} - \left( \mathbf{R} \cdot \frac{\partial {\mathbf{x}_c(t_r)}}{\partial {t_r}} \right) \boldsymbol{\nabla} t_r \right) \\ &=\frac{1}{{R}} \left(\mathbf{R} - \left( \mathbf{R} \cdot \mathbf{v}_c \right) \boldsymbol{\nabla} t_r \right)\end{aligned}

Putting the pieces together we have only minor algebra left since we can now equate the two expansions of \boldsymbol{\nabla} R

\begin{aligned}- c \boldsymbol{\nabla} t_r = \hat{\mathbf{R}} - \hat{\mathbf{R}} \cdot \mathbf{v}_c(t_r) \boldsymbol{\nabla} t_r.\end{aligned} \hspace{\stretch{1}}(2.36)

This is given in the text, but these in between steps are left for us and for our homework assignments! From this point we can rearrange to find the desired result

\begin{aligned}\boxed{\boldsymbol{\nabla} t_r = -\frac{1}{{c}} \frac{\hat{\mathbf{R}} }{ 1 - \hat{\mathbf{R}} \cdot \frac{\mathbf{v}_c}{c} } = - \frac{\hat{\mathbf{R}}}{c} \frac{\partial {t_r}}{\partial {t}}}\end{aligned} \hspace{\stretch{1}}(2.37)
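Both boxed results can be spot checked numerically by solving the implicit retarded time condition 2.20 directly and comparing against finite differences. The worldline and field point below are arbitrary choices for illustration (a numpy sketch, with c = 1):

```python
import numpy as np

c = 1.0

def x_c(t):
    # arbitrary worldline: circular motion at speed 0.3 c
    return np.array([np.cos(0.3 * t), np.sin(0.3 * t), 0.0])

def v_c(t):
    return 0.3 * np.array([-np.sin(0.3 * t), np.cos(0.3 * t), 0.0])

def t_ret(x, t):
    # solve |x - x_c(t_r)| = c (t - t_r) for t_r < t by bisection
    f = lambda tr: c * (t - tr) - np.linalg.norm(x - x_c(tr))
    lo, hi = t - 100.0, t             # f(lo) > 0 > f(hi) since v < c
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

x, t, h = np.array([5.0, 2.0, 1.0]), 0.0, 1e-6
tr = t_ret(x, t)
R = x - x_c(tr)
Rhat = R / np.linalg.norm(R)
gamma_tr = 1.0 / (1.0 - Rhat.dot(v_c(tr)) / c)

# d t_r / d t against the boxed result 2.32
dtr_dt = (t_ret(x, t + h) - t_ret(x, t - h)) / (2.0 * h)
assert abs(dtr_dt - gamma_tr) < 1e-5

# gradient of t_r against the boxed result 2.37
grad = np.array([(t_ret(x + h * e, t) - t_ret(x - h * e, t)) / (2.0 * h)
                 for e in np.eye(3)])
assert np.allclose(grad, -Rhat / c * gamma_tr, atol=1e-5)
```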

1. Solution. Computing the EM fields from the Lienard-Wiechert potentials.

Now we are ready to derive the values of \mathbf{E} and \mathbf{B} that arise from the Lienard-Wiechert potentials. We’ll evaluate

\begin{aligned}\mathbf{E} &= -\frac{1}{{c}} \frac{\partial {\mathbf{A}}}{\partial {t}} - \boldsymbol{\nabla} \phi \\ \mathbf{B} &= \boldsymbol{\nabla} \times \mathbf{A}\end{aligned}

For the electric field we’ll use the chain rule on the vector potential

\begin{aligned}\frac{\partial {\mathbf{A}}}{\partial {t}} = \frac{\partial {t_r}}{\partial {t}} \frac{\partial {\mathbf{A}}}{\partial {t_r}}.\end{aligned} \hspace{\stretch{1}}(2.38)

Similarly for the gradient of the scalar potential we have

\begin{aligned}\boldsymbol{\nabla} \phi &=\mathbf{e}_\alpha \frac{\partial {\phi}}{\partial {x^\alpha}} \\ &=\mathbf{e}_\alpha \frac{\partial {\phi}}{\partial {t_r}} \frac{\partial {t_r}}{\partial {x^\alpha}} \\ &=\frac{\partial {\phi}}{\partial {t_r}} \boldsymbol{\nabla} t_r \\ &=- \frac{\partial {\phi}}{\partial {t_r}} \frac{\hat{\mathbf{R}}}{c} \frac{\partial {t_r}}{\partial {t}}.\end{aligned}

Our electric field is thus

\begin{aligned}\mathbf{E} = - \frac{\partial {t_r}}{\partial {t}} \left( \frac{1}{{c}} \frac{\partial {\mathbf{A}}}{\partial {t_r}} - \frac{\hat{\mathbf{R}}}{c} \frac{\partial {\phi}}{\partial {t_r}} \right)\end{aligned} \hspace{\stretch{1}}(2.39)

For the magnetic field we have

\begin{aligned}\boldsymbol{\nabla} \times \mathbf{A}&=\mathbf{e}_\alpha \times \frac{\partial {\mathbf{A}}}{\partial {x^\alpha}} \\ &=\mathbf{e}_\alpha \times \frac{\partial {\mathbf{A}}}{\partial {t_r}} \frac{\partial {t_r}}{\partial {x^\alpha}}.\end{aligned}

The magnetic field will therefore be found by evaluating

\begin{aligned}\mathbf{B} = (\boldsymbol{\nabla} t_r) \times \frac{\partial {\mathbf{A}}}{\partial {t_r}} = - \frac{\partial {t_r}}{\partial {t}} \frac{\hat{\mathbf{R}}}{c} \times \frac{\partial {\mathbf{A}}}{\partial {t_r}} \end{aligned} \hspace{\stretch{1}}(2.40)

Let’s compare this to \hat{\mathbf{R}} \times \mathbf{E}

\begin{aligned}\hat{\mathbf{R}} \times \mathbf{E} &= \hat{\mathbf{R}} \times \left( - \frac{\partial {t_r}}{\partial {t}} \left( \frac{1}{{c}} \frac{\partial {\mathbf{A}}}{\partial {t_r}} - \frac{\hat{\mathbf{R}}}{c} \frac{\partial {\phi}}{\partial {t_r}} \right) \right) \\ &= \hat{\mathbf{R}} \times \left( - \frac{\partial {t_r}}{\partial {t}} \frac{1}{{c}} \frac{\partial {\mathbf{A}}}{\partial {t_r}} \right)\end{aligned}

This equals 2.40, verifying that we have

\begin{aligned}\mathbf{B} = \hat{\mathbf{R}} \times \mathbf{E},\end{aligned} \hspace{\stretch{1}}(2.41)

something that we can determine even without fully evaluating \mathbf{E}.

We are now left to evaluate the retarded time derivatives found in 2.39. Our potentials are

\begin{aligned}\phi(\mathbf{x}, t) &= \frac{e}{R(t_r)} \frac{\partial {t_r}}{\partial {t}} \\ \mathbf{A}(\mathbf{x}, t) &= \frac{e \mathbf{v}_c(t_r)}{c R(t_r)} \frac{\partial {t_r}}{\partial {t}}\end{aligned} \hspace{\stretch{1}}(2.42)

It’s clear that the quantity {\partial {t_r}}/{\partial {t}} is going to show up all over the place, so let’s label it \gamma_{t_r}. This is justified by comparing to a particle’s boosted rest frame worldline

\begin{aligned}\begin{bmatrix}c t' \\ x'\end{bmatrix}=\gamma\begin{bmatrix}1 & -\beta \\ -\beta & 1\end{bmatrix}\begin{bmatrix}c t \\ 0\end{bmatrix}= \begin{bmatrix}\gamma c t \\ -\gamma \beta c t\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(2.44)

where we have {\partial {t'}}/{\partial {t}} = \gamma, so for the remainder of this part of this problem we’ll write

\begin{aligned}\gamma_{t_r} \equiv \frac{\partial {t_r}}{\partial {t}} = \frac{1}{{ 1 -\hat{\mathbf{R}} \cdot \frac{\mathbf{v}_c}{c} }}.\end{aligned} \hspace{\stretch{1}}(2.45)

Using primes to denote partial derivatives with respect to the retarded time t_r we have

\begin{aligned}\phi' &= e \left( -\frac{R'}{R^2} \gamma_{t_r} + \frac{\gamma_{t_r}'}{R} \right) \\ \mathbf{A}' &= e \frac{\mathbf{v}_c}{c} \left( -\frac{R'}{R^2} \gamma_{t_r} + \frac{\gamma_{t_r}'}{R} \right)+ e \frac{\mathbf{a}_c}{c} \frac{\gamma_{t_r}}{R},\end{aligned} \hspace{\stretch{1}}(2.46)

so the electric field is

\begin{aligned}\mathbf{E} &= - \gamma_{t_r} \left( \frac{1}{{c}} \mathbf{A}' - \frac{\hat{\mathbf{R}}}{c} \phi' \right) \\ &= - \frac{e \gamma_{t_r}}{c} \left( \frac{\mathbf{v}_c}{c} \left( -\frac{R'}{R^2} \gamma_{t_r} + \frac{\gamma_{t_r}'}{R} \right)+ \frac{\mathbf{a}_c}{c} \frac{\gamma_{t_r}}{R}- \hat{\mathbf{R}} \left( -\frac{R'}{R^2} \gamma_{t_r} + \frac{\gamma_{t_r}'}{R} \right) \right) \\ &= - \frac{e \gamma_{t_r}}{c} \left( \frac{\mathbf{v}_c}{c} \left( \frac{c}{R^2} \gamma_{t_r} + \frac{\gamma_{t_r}'}{R} \right)+ \frac{\mathbf{a}_c}{c} \frac{\gamma_{t_r}}{R}- \hat{\mathbf{R}} \left( \frac{c}{R^2} \gamma_{t_r} + \frac{\gamma_{t_r}'}{R} \right) \right) \\ &= - \frac{e \gamma_{t_r}}{c R} \left( \gamma_{t_r} \left( \frac{\mathbf{a}_c}{c} +\frac{\mathbf{v}_c}{R} - \frac{\hat{\mathbf{R}} c}{R}\right)+ \gamma_{t_r}' \left( \frac{\mathbf{v}_c}{c} - \hat{\mathbf{R}} \right)\right).\end{aligned}

Here’s where things get slightly messy.

\begin{aligned}\gamma_{t_r}' &= \frac{\partial {}}{\partial {t_r}} \frac{1}{{1 - \frac{\mathbf{v}_c}{c} \cdot \hat{\mathbf{R}} }} \\ &= -\gamma_{t_r}^2 \frac{\partial {}}{\partial {t_r}} \left( 1 - \frac{\mathbf{v}_c}{c} \cdot \hat{\mathbf{R}} \right) \\ &= \gamma_{t_r}^2 \left( \frac{\mathbf{a}_c}{c} \cdot \hat{\mathbf{R}} + \frac{\mathbf{v}_c}{c} \cdot \hat{\mathbf{R}}' \right),\end{aligned}

and messier

\begin{aligned}\hat{\mathbf{R}}' &= \frac{\partial {}}{\partial {t_r}} \frac{ \mathbf{R} }{ R } \\ &= \frac{ \mathbf{R}' }{ R } - \frac{\mathbf{R} R'}{R^2} \\ &= -\frac{ \mathbf{v}_c }{ R } - \frac{\hat{\mathbf{R}} (-c)}{R} \\ &= \frac{1}{{R}} \left( -\mathbf{v}_c + c \hat{\mathbf{R}} \right),\end{aligned}

then a bit unmessier

\begin{aligned}\gamma_{t_r}'&= \gamma_{t_r}^2 \left( \frac{\mathbf{a}_c}{c} \cdot \hat{\mathbf{R}} + \frac{\mathbf{v}_c}{c} \cdot \hat{\mathbf{R}}' \right) \\ &= \gamma_{t_r}^2 \left( \frac{\mathbf{a}_c}{c} \cdot \hat{\mathbf{R}} + \frac{\mathbf{v}_c}{c R} \cdot (-\mathbf{v}_c + c \hat{\mathbf{R}}) \right) \\ &= \gamma_{t_r}^2 \left( \hat{\mathbf{R}} \cdot \left( \frac{\mathbf{a}_c}{c} + \frac{\mathbf{v}_c}{R} \right) - \frac{\mathbf{v}_c^2}{c R} \right).\end{aligned}

Now we are set to plug this back into our electric field expression and start grouping terms

\begin{aligned}\mathbf{E}&= - \frac{e \gamma_{t_r}^2}{c R} \left( \frac{\mathbf{a}_c}{c} +\frac{\mathbf{v}_c}{R} - \frac{\hat{\mathbf{R}} c}{R}+ \gamma_{t_r} \left( \hat{\mathbf{R}} \cdot \left( \frac{\mathbf{a}_c}{c} + \frac{\mathbf{v}_c}{R} \right) - \frac{\mathbf{v}_c^2}{c R} \right)\left( \frac{\mathbf{v}_c}{c} - \hat{\mathbf{R}} \right)\right) \\ &= - \frac{e \gamma_{t_r}^3}{c R} \left( \left(\frac{\mathbf{a}_c}{c} +\frac{\mathbf{v}_c}{R} - \frac{\hat{\mathbf{R}} c}{R}\right) \left(1 -\hat{\mathbf{R}} \cdot \frac{\mathbf{v}_c}{c} \right)+ \left( \hat{\mathbf{R}} \cdot \left( \frac{\mathbf{a}_c}{c} + \frac{\mathbf{v}_c}{R} \right) - \frac{\mathbf{v}_c^2}{c R} \right)\left( \frac{\mathbf{v}_c}{c} - \hat{\mathbf{R}} \right)\right) \\ &= - \frac{e \gamma_{t_r}^3}{c^2 R} \left( \mathbf{a}_c\left(1 -\hat{\mathbf{R}} \cdot \frac{\mathbf{v}_c}{c} \right)+\hat{\mathbf{R}} \cdot \mathbf{a}_c \left( \frac{\mathbf{v}_c}{c} - \hat{\mathbf{R}} \right)\right) \\ &\quad - \frac{e \gamma_{t_r}^3}{c R} \left( \left(\frac{\mathbf{v}_c}{R} - \frac{\hat{\mathbf{R}} c}{R}\right) \left(1 -\hat{\mathbf{R}} \cdot \frac{\mathbf{v}_c}{c} \right)+ \left( \hat{\mathbf{R}} \cdot \left( \frac{\mathbf{v}_c}{R} \right) - \frac{\mathbf{v}_c^2}{c R} \right)\left( \frac{\mathbf{v}_c}{c} - \hat{\mathbf{R}} \right)\right) \\ \end{aligned}


In the reduction that follows we will use the vector triple product identity

\begin{aligned}\mathbf{a} \times (\mathbf{b} \times \mathbf{c}) = \mathbf{b} (\mathbf{a} \cdot \mathbf{c}) - \mathbf{c} (\mathbf{a} \cdot \mathbf{b})\end{aligned} \hspace{\stretch{1}}(2.48)

We can verify that

\begin{aligned}- \left( \mathbf{a}_c\left(1 -\hat{\mathbf{R}} \cdot \frac{\mathbf{v}_c}{c} \right)+\hat{\mathbf{R}} \cdot \mathbf{a}_c \left( \frac{\mathbf{v}_c}{c} - \hat{\mathbf{R}} \right)\right) &=-\mathbf{a}_c + \mathbf{a}_c \hat{\mathbf{R}} \cdot \frac{\mathbf{v}_c}{c} - \hat{\mathbf{R}} \cdot \mathbf{a}_c \frac{\mathbf{v}_c}{c} + \hat{\mathbf{R}} \cdot \mathbf{a}_c \hat{\mathbf{R}} \\ &= \hat{\mathbf{R}} \times \left( \left(\hat{\mathbf{R}} - \frac{\mathbf{v}_c}{c} \right) \times \mathbf{a}_c \right),\end{aligned}

which gets us closer to the desired end result

\begin{aligned}\mathbf{E}= \frac{e \gamma_{t_r}^3}{c^2 R} \hat{\mathbf{R}} \times \left( \left(\hat{\mathbf{R}} - \frac{\mathbf{v}_c}{c} \right) \times \mathbf{a}_c \right)- \frac{e \gamma_{t_r}^3}{c R^2} \left( \left(\mathbf{v}_c- \hat{\mathbf{R}} c\right) \left(1 -\hat{\mathbf{R}} \cdot \frac{\mathbf{v}_c}{c} \right)+ \left( \hat{\mathbf{R}} \cdot \mathbf{v}_c - \frac{\mathbf{v}_c^2}{c} \right)\left( \frac{\mathbf{v}_c}{c} - \hat{\mathbf{R}} \right)\right).\end{aligned} \hspace{\stretch{1}}(2.49)
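Triple product reductions like the one above are easy to get wrong by a sign, so here's a quick numerical spot check (a numpy sketch with random vectors, not part of the original derivation):

```python
import numpy as np

# Spot check: -(a (1 - Rhat.beta) + (Rhat.a)(beta - Rhat)) = Rhat x ((Rhat - beta) x a)
rng = np.random.default_rng(0)
a = rng.normal(size=3)            # stands in for the acceleration a_c
beta = 0.3 * rng.normal(size=3)   # stands in for v_c/c
Rhat = rng.normal(size=3)
Rhat /= np.linalg.norm(Rhat)      # unit vector

lhs = -(a * (1 - Rhat @ beta) + (Rhat @ a) * (beta - Rhat))
rhs = np.cross(Rhat, np.cross(Rhat - beta, a))
assert np.allclose(lhs, rhs)
```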

It is also easy to show that the remaining bit reduces nicely, since all the dot product terms conveniently cancel

\begin{aligned}- \left( \left(\mathbf{v}_c- \hat{\mathbf{R}} c\right) \left(1 -\hat{\mathbf{R}} \cdot \frac{\mathbf{v}_c}{c} \right)+ \left( \hat{\mathbf{R}} \cdot \mathbf{v}_c - \frac{\mathbf{v}_c^2}{c} \right)\left( \frac{\mathbf{v}_c}{c} - \hat{\mathbf{R}} \right)\right) = c \left( 1 - \frac{\mathbf{v}_c^2}{c^2} \right) \left( \hat{\mathbf{R}} - \frac{\mathbf{v}_c}{c} \right).\end{aligned} \hspace{\stretch{1}}(2.50)
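That cancellation is also quick to verify numerically (a sketch with arbitrary vectors, taking c = 1):

```python
import numpy as np

rng = np.random.default_rng(1)
c = 1.0
beta = 0.4 * rng.normal(size=3)   # v_c/c
v = c * beta
Rhat = rng.normal(size=3)
Rhat /= np.linalg.norm(Rhat)

# left and right hand sides of the cancellation identity above
lhs = -((v - c * Rhat) * (1 - Rhat @ beta) + (Rhat @ v - (v @ v) / c) * (beta - Rhat))
rhs = c * (1 - beta @ beta) * (Rhat - beta)
assert np.allclose(lhs, rhs)
```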

This completes the exercise, leaving us with

\begin{aligned}\boxed{\begin{aligned}\mathbf{E}&= \frac{e \gamma_{t_r}^3}{c^2 R} \hat{\mathbf{R}} \times \left( \left(\hat{\mathbf{R}} - \frac{\mathbf{v}_c}{c} \right) \times \mathbf{a}_c \right)+\frac{e \gamma_{t_r}^3}{R^2} \left( 1 - \frac{\mathbf{v}_c^2}{c^2} \right) \left( \hat{\mathbf{R}} - \frac{\mathbf{v}_c}{c} \right) \\ \mathbf{B} &= \hat{\mathbf{R}} \times \mathbf{E}.\end{aligned}}\end{aligned} \hspace{\stretch{1}}(2.51)

Looking back to 2.45 where \gamma_{t_r} was defined, we see that this compares to (63.8-9) in the text.

2. Solution. EM fields from a uniformly moving source.

For a source moving in space at constant velocity

\begin{aligned}\mathbf{x}_c(t) = \mathbf{v} t,\end{aligned} \hspace{\stretch{1}}(2.52)

our retarded time measured from the spacetime point (ct, \mathbf{x}) is defined implicitly by

\begin{aligned}R = {\left\lvert{\mathbf{x} - \mathbf{x}_c(t_r) }\right\rvert} = c (t - t_r).\end{aligned} \hspace{\stretch{1}}(2.53)

Squaring this we have

\begin{aligned}\mathbf{x}^2 + \mathbf{v}^2 t_r^2 - 2 t_r \mathbf{x} \cdot \mathbf{v} = c^2 t^2 + c^2 t_r^2 - 2 c t t_r,\end{aligned} \hspace{\stretch{1}}(2.54)


\begin{aligned}( c^2 -\mathbf{v}^2) t_r^2 + 2 t_r ( - c t + \mathbf{x} \cdot \mathbf{v} ) = \mathbf{x}^2 - c^2 t^2.\end{aligned} \hspace{\stretch{1}}(2.55)

Rearranging to complete the square we have

\begin{aligned}\left( \sqrt{c^2 - \mathbf{v}^2} t_r - \frac{ t c^2 - \mathbf{x} \cdot \mathbf{v} }{\sqrt{c^2 - \mathbf{v}^2}} \right)^2 &= \mathbf{x}^2 - c^2 t^2 +\frac{ (t c^2 - \mathbf{x} \cdot \mathbf{v})^2 }{c^2 - \mathbf{v}^2} \\ &= \frac{ (\mathbf{x}^2 - c^2 t^2)( c^2 - \mathbf{v}^2) + (t c^2 - \mathbf{x} \cdot \mathbf{v})^2}{ c^2 - \mathbf{v}^2} \\ &= \frac{ \mathbf{x}^2 c^2 - \mathbf{x}^2 \mathbf{v}^2 - {c^4 t^2} + c^2 t^2 \mathbf{v}^2 + {t^2 c^4} + (\mathbf{x} \cdot \mathbf{v})^2 - 2 t c^2 (\mathbf{x} \cdot \mathbf{v}) }{ c^2 - \mathbf{v}^2 } \\ &= \frac{ c^2 ( \mathbf{x}^2 + t^2 \mathbf{v}^2 -2 t (\mathbf{x} \cdot \mathbf{v})) - \mathbf{x}^2 \mathbf{v}^2 + (\mathbf{x} \cdot \mathbf{v})^2 }{ c^2 - \mathbf{v}^2 } \\ &= \frac{ c^2 ( \mathbf{x} - \mathbf{v} t)^2 - (\mathbf{x} \times \mathbf{v})^2 }{ c^2 - \mathbf{v}^2 } \\ \end{aligned}

Taking roots (and keeping the negative sign so that we have t_r = t - {\left\lvert{\mathbf{x}}\right\rvert}/c for the \mathbf{v} = 0 case), we have

\begin{aligned}\sqrt{1 - \frac{\mathbf{v}^2}{c^2}} c t_r =\frac{1}{{\sqrt{1 - \frac{\mathbf{v}^2}{c^2}}}} \left(c t - \mathbf{x} \cdot \frac{\mathbf{v}}{c} - \sqrt{ \left( \mathbf{x} - \mathbf{v} t \right)^2 - \left(\mathbf{x} \times \frac{\mathbf{v}}{c}\right)^2 }\right),\end{aligned} \hspace{\stretch{1}}(2.56)

or with \boldsymbol{\beta} = \mathbf{v}/c, this is

\begin{aligned}c t_r = \frac{1}{{1 - \boldsymbol{\beta}^2}} \left( c t - \mathbf{x} \cdot \boldsymbol{\beta} - \sqrt{ \left( \mathbf{x} - \mathbf{v} t \right)^2 - \left(\mathbf{x} \times \boldsymbol{\beta}\right)^2 } \right).\end{aligned} \hspace{\stretch{1}}(2.57)
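Since the implicit definition 2.53 is the ground truth here, the closed form is easy to validate numerically. Here's a sketch (arbitrary observation point and subluminal velocity, c = 1) checking that the solved t_r actually satisfies {\left\lvert{\mathbf{x} - \mathbf{v} t_r}\right\rvert} = c(t - t_r):

```python
import numpy as np

c = 1.0
v = np.array([0.3, 0.1, -0.2]) * c    # constant velocity, |v| < c
beta = v / c
x = np.array([2.0, -1.0, 0.5])
t = 1.5

# closed form for the retarded time (2.57)
root = np.sqrt((x - v * t) @ (x - v * t) - np.cross(x, beta) @ np.cross(x, beta))
t_r = (c * t - x @ beta - root) / (1 - beta @ beta) / c

# implicit definition (2.53): |x - v t_r| = c (t - t_r), with t_r in the past
assert t_r < t
assert np.isclose(np.linalg.norm(x - v * t_r), c * (t - t_r))
```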

What’s our retarded distance R = c t - c t_r? We get

\begin{aligned}R = \frac{\boldsymbol{\beta} \cdot (\mathbf{x} - \mathbf{v} t) + \sqrt{ (\mathbf{x} - \mathbf{v} t)^2 - (\mathbf{x} \times \boldsymbol{\beta})^2 }}{ 1 - \boldsymbol{\beta}^2 }.\end{aligned} \hspace{\stretch{1}}(2.58)

For the vector distance we get (with \boldsymbol{\beta} \cdot (\mathbf{x} \wedge \boldsymbol{\beta}) = (\boldsymbol{\beta} \cdot \mathbf{x}) \boldsymbol{\beta} - \mathbf{x} \boldsymbol{\beta}^2)

\begin{aligned}\mathbf{R} = \frac{\mathbf{x} -\mathbf{v} t + \boldsymbol{\beta} \cdot (\mathbf{x} \wedge \boldsymbol{\beta}) + \boldsymbol{\beta} \sqrt{ (\mathbf{x} - \mathbf{v} t)^2 - (\mathbf{x} \times \boldsymbol{\beta})^2 }}{ 1 - \boldsymbol{\beta}^2 }.\end{aligned} \hspace{\stretch{1}}(2.59)

For the unit vector \hat{\mathbf{R}} = \mathbf{R}/R we have

\begin{aligned}\hat{\mathbf{R}} = \frac{\mathbf{x} - \mathbf{v} t + \boldsymbol{\beta} \cdot (\mathbf{x} \wedge \boldsymbol{\beta}) + \boldsymbol{\beta} \sqrt{ (\mathbf{x} - \mathbf{v} t)^2 - (\mathbf{x} \times \boldsymbol{\beta})^2 }}{ \boldsymbol{\beta} \cdot (\mathbf{x} - \mathbf{v} t) + \sqrt{ (\mathbf{x} - \mathbf{v} t)^2 - (\mathbf{x} \times \boldsymbol{\beta})^2 } }.\end{aligned} \hspace{\stretch{1}}(2.60)

The acceleration term in the electric field is zero, so we are left with just

\begin{aligned}\mathbf{E}= \frac{e \gamma_{t_r}^3}{R^2} \left( 1 - \frac{\mathbf{v}_c^2}{c^2} \right) \left( \hat{\mathbf{R}} - \frac{\mathbf{v}_c}{c} \right).\end{aligned} \hspace{\stretch{1}}(2.61)

Working toward \gamma_{t_r}, we first compute

\begin{aligned}\hat{\mathbf{R}} \cdot \boldsymbol{\beta} = \frac{\boldsymbol{\beta} \cdot (\mathbf{x} - \mathbf{v} t + R^{*} \boldsymbol{\beta})}{\boldsymbol{\beta} \cdot (\mathbf{x} - \mathbf{v} t) + R^{*}},\end{aligned} \hspace{\stretch{1}}(2.62)

where, following section 38 of the text, we write

\begin{aligned}R^{*} = \sqrt{(\mathbf{x} - \mathbf{v} t)^2 - (\mathbf{x} \times \boldsymbol{\beta})^2 }\end{aligned} \hspace{\stretch{1}}(2.63)

This gives us

\begin{aligned}\gamma_{t_r} = \frac{\boldsymbol{\beta} \cdot (\mathbf{x} - \mathbf{v} t) + R^{*}}{R^{*}(1 - \boldsymbol{\beta}^2)}.\end{aligned} \hspace{\stretch{1}}(2.64)

Observe that this equals one when \boldsymbol{\beta} = 0 as expected.
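Here's a numerical spot check of 2.64 against the definition 2.45, computing \hat{\mathbf{R}} directly from the retarded position (a sketch with arbitrary numbers, c = 1):

```python
import numpy as np

c = 1.0
v = np.array([0.25, -0.1, 0.3]) * c
beta = v / c
x = np.array([1.0, 2.0, -0.5])
t = 0.8

Rstar = np.sqrt((x - v * t) @ (x - v * t) - np.cross(x, beta) @ np.cross(x, beta))
t_r = (c * t - x @ beta - Rstar) / (1 - beta @ beta) / c    # retarded time (2.57)
Rhat = (x - v * t_r) / np.linalg.norm(x - v * t_r)          # unit vector to the retarded position

gamma_direct = 1.0 / (1 - Rhat @ beta)                      # definition (2.45)
gamma_closed = (beta @ (x - v * t) + Rstar) / (Rstar * (1 - beta @ beta))  # (2.64)
assert np.isclose(gamma_direct, gamma_closed)
```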

We can also compute

\begin{aligned}\hat{\mathbf{R}} - \boldsymbol{\beta} &= \frac{\mathbf{x} + \boldsymbol{\beta} \cdot (\mathbf{x} \wedge \boldsymbol{\beta}) - \mathbf{v} t + \boldsymbol{\beta} \sqrt{ (\mathbf{x} - \mathbf{v} t)^2 - (\mathbf{x} \times \boldsymbol{\beta})^2 }}{ \boldsymbol{\beta} \cdot (\mathbf{x} - \mathbf{v} t) + \sqrt{ (\mathbf{x} - \mathbf{v} t)^2 - (\mathbf{x} \times \boldsymbol{\beta})^2 } } - \boldsymbol{\beta} \\ &=\frac{(\mathbf{x} - \mathbf{v} t)(1 -\boldsymbol{\beta}^2)}{\boldsymbol{\beta} \cdot (\mathbf{x} - \mathbf{v} t) + \sqrt{ (\mathbf{x} - \mathbf{v} t)^2 - (\mathbf{x} \times \boldsymbol{\beta})^2 } }.\end{aligned}

Our long and messy expression for the field is therefore

\begin{aligned}\mathbf{E} &=e \gamma_{t_r}^3 \frac{1}{{R^2}} (1 - \boldsymbol{\beta}^2)(\hat{\mathbf{R}} - \boldsymbol{\beta}) \\ &=e \left( \frac{\boldsymbol{\beta} \cdot (\mathbf{x} - \mathbf{v} t) + R^{*}}{R^{*}(1 - \boldsymbol{\beta}^2)}\right)^3\frac{(1 - \boldsymbol{\beta}^2)^2 }{ (\boldsymbol{\beta} \cdot (\mathbf{x} - \mathbf{v} t) + R^{*})^2 } (1 -\boldsymbol{\beta}^2)\frac{(\mathbf{x} - \mathbf{v} t)(1 -\boldsymbol{\beta}^2)}{\boldsymbol{\beta} \cdot (\mathbf{x} - \mathbf{v} t) + R^{*}} \\ \end{aligned}

This gives us our final result

\begin{aligned}\mathbf{E} =e \frac{1}{{(R^{*})^3}}(1 -\boldsymbol{\beta}^2)(\mathbf{x} - \mathbf{v} t)\end{aligned} \hspace{\stretch{1}}(2.65)

As a small test we observe that we get the expected result

\begin{aligned}\mathbf{E} = e \frac{\mathbf{x}}{{\left\lvert{\mathbf{x}}\right\rvert}^3}\end{aligned} \hspace{\stretch{1}}(2.66)

for the \boldsymbol{\beta} = 0 case.
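We can also check 2.65 numerically against the unevaluated form 2.61, assembling \gamma_{t_r}, R, and \hat{\mathbf{R}} directly from the retarded time (a sketch, with c = e = 1):

```python
import numpy as np

c, e = 1.0, 1.0
v = np.array([0.2, 0.3, -0.1]) * c
beta = v / c
x = np.array([1.5, -0.4, 2.0])
t = 0.6

Rstar = np.sqrt((x - v * t) @ (x - v * t) - np.cross(x, beta) @ np.cross(x, beta))
t_r = (c * t - x @ beta - Rstar) / (1 - beta @ beta) / c
Rvec = x - v * t_r                       # vector retarded distance
R = np.linalg.norm(Rvec)
gamma_tr = 1.0 / (1 - (Rvec / R) @ beta)

E_direct = e * gamma_tr**3 / R**2 * (1 - beta @ beta) * (Rvec / R - beta)  # (2.61)
E_closed = e * (1 - beta @ beta) * (x - v * t) / Rstar**3                  # (2.65)
assert np.allclose(E_direct, E_closed)
```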

When \mathbf{v} = V \mathbf{e}_1 this also recovers equation (38.6) from the text as desired, and if we switch to primed coordinates

\begin{aligned}x' &= \gamma( x - v t) \\ y' &= y \\ z' &= z \\ (1 - \beta^2) {r'}^2 &= (x - v t)^2 + (y^2 + z^2)(1 - \beta^2),\end{aligned} \hspace{\stretch{1}}(2.67)

we recover the field equation derived twice before in previous problem sets

\begin{aligned}\mathbf{E} = \frac{e}{(r')^3} ( x', \gamma y', \gamma z')\end{aligned} \hspace{\stretch{1}}(2.71)

2. Solution. EM fields from a uniformly moving source along x axis.

Initially I had errors in the vector treatment above, so tried with the simpler case using uniform velocity v along the x axis instead. Comparison of the two showed where my errors were in the vector algebra, and that’s now also fixed up.

Performing all the algebra to solve for t_r in

\begin{aligned}{\left\lvert{\mathbf{x} - v t_r \mathbf{e}_1}\right\rvert} = c(t - t_r),\end{aligned} \hspace{\stretch{1}}(2.72)

I get

\begin{aligned}c t_r = \frac{c t - x \beta - \sqrt{ (x- v t)^2 + (y^2 + z^2)(1-\beta^2) } }{ 1 - \beta^2 } = c t - \gamma (\beta x' + r' )\end{aligned} \hspace{\stretch{1}}(2.73)

This matches the vector expression from 2.57 with the special case of \mathbf{v} = v \mathbf{e}_1 so we at least started off on the right foot.

For the retarded distance R = ct - c t_r we get

\begin{aligned}R = \frac{ \beta( x - v t) + \sqrt{ (x- v t)^2 + (y^2 + z^2)(1-\beta^2) } }{ 1 - \beta^2 } = \gamma( \beta x' + r' )\end{aligned} \hspace{\stretch{1}}(2.74)

This also matches 2.58, so things still seem okay with the vector approach. What's our vector retarded distance?

\begin{aligned}\mathbf{R} &= \mathbf{x} - \beta c t_r \mathbf{e}_1 \\ &= (x - \beta c t_r, y, z) \\ &= \left( \frac{ x - v t + \beta \sqrt{ (x- v t)^2 + (y^2 + z^2)(1-\beta^2) } }{ 1 - \beta^2 }, y, z \right) \\ &= \left( \gamma (x' + \beta r'), y', z' \right)\end{aligned}


\begin{aligned}\hat{\mathbf{R}} &= \frac{1}{{ \gamma (\beta x' + r') }} \left( \gamma(x' + \beta r'), y', z' \right) \\ &= \frac{1}{{ \beta x' + r' }} \left( x' + \beta r', \frac{y'}{\gamma}, \frac{z'}{\gamma} \right)\end{aligned}

\begin{aligned}\hat{\mathbf{R}} -\boldsymbol{\beta}&= \frac{1}{{ \gamma (\beta x' + r') }} \left( \gamma(x' + \beta r'), y', z' \right) - (\beta, 0, 0) \\ &= \frac{1}{{ \beta x' + r' }} \left( x'(1- \beta^2), \frac{y'}{\gamma}, \frac{z'}{\gamma} \right) \\ &= \frac{1}{{\gamma (\beta x' + r')}} ( x - v t, y, z)\end{aligned}

For {\partial {t_r}}/{\partial {t}}, using \hat{\mathbf{R}} calculated above, or by calculating directly from 2.73, I get

\begin{aligned}\frac{\partial {t_r}}{\partial {t}} = \frac{r' + \beta x'}{r'(1 - \beta^2)} = \frac{\gamma( r' + \beta x') }{R^{*}},\end{aligned} \hspace{\stretch{1}}(2.75)

where, as in section 38 of the text, we write

\begin{aligned}R^{*} = \sqrt{ (x - v t)^2 + (y^2 + z^2)(1-\beta^2) }.\end{aligned} \hspace{\stretch{1}}(2.76)
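The primed-coordinate form 2.75 can be spot checked against the definition \gamma_{t_r} = 1/(1 - \hat{\mathbf{R}} \cdot \boldsymbol{\beta}), using R = \gamma(\beta x' + r') from 2.74 to locate the retarded point (a sketch, c = 1):

```python
import numpy as np

c = 1.0
beta = 0.6
v = beta * c
gamma = 1.0 / np.sqrt(1 - beta**2)
x, y, z = 1.2, -0.7, 0.4
t = 0.9

xp = gamma * (x - v * t)                                   # x'
rp = np.sqrt(xp**2 + y**2 + z**2)                          # r'
Rstar = np.sqrt((x - v * t)**2 + (y**2 + z**2) * (1 - beta**2))

R = gamma * (beta * xp + rp)                               # retarded distance (2.74)
t_r = t - R / c
Rvec = np.array([x - v * t_r, y, z])
Rhat = Rvec / np.linalg.norm(Rvec)

gamma_direct = 1.0 / (1 - Rhat[0] * beta)
gamma_primed = gamma * (rp + beta * xp) / Rstar            # (2.75)
assert np.isclose(gamma_direct, gamma_primed)
assert np.isclose(np.linalg.norm(Rvec), R)                 # consistency: |R| = c (t - t_r)
```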

Putting all the pieces together I get

\begin{aligned}\mathbf{E} &= e (1 -\beta^2) \frac{(x - v t, y, z)}{\gamma( \beta x' + r')} \left( \frac{\gamma( r' + \beta x')}{R^{*}} \right)^3 \frac{1}{{ \gamma^2 (\beta x' + r')^2 }} \\ \end{aligned}

so we have

\begin{aligned}\mathbf{E} = e \frac{1 -\beta^2}{(R^{*})^3} (x - v t, y, z) \end{aligned} \hspace{\stretch{1}}(2.77)

This matches equation (38.6) in the text.

Problem 3.


Grading notes.

Only the first question was graded (I lost 1.5 marks). I got my units wrong when I integrated 1.11, and my \omega/c factor in the result is circled. Should have also done a dimensional analysis check. There was also a remark that the integral is zero if t < y/c, which introduces a \theta function. I think that I incorrectly dropped my \theta function, and should have retained it.

FIXME: Revisit both of these and make sure to understand where exactly I went wrong.


[1] L.D. Landau and E.M. Lifshitz. The classical theory of fields. Butterworth-Heinemann, 1980.

Posted in Math and Physics Learning.

PHY450H1S, Relativistic Electrodynamics, Problem Set 4.

Posted by peeterjoot on March 17, 2011

[Click here for a PDF of this post with nicer formatting]

Problem 1. Energy, momentum, etc., of EM waves.


\item Calculate the energy density, energy flux, and momentum density of a plane monochromatic linearly polarized electromagnetic wave.
\item Calculate the values of these quantities averaged over a period.
\item Imagine that a plane monochromatic linearly polarized wave incident on a surface (let the angle between the wave vector and the normal to the surface be \theta) is completely reflected. Find the pressure that the EM wave exerts on the surface.
\item To plug in some numbers, note that the intensity of sunlight hitting the Earth is about 1300 W/m^2 (the intensity is the average power per unit area transported by the wave). If sunlight strikes a perfect absorber, what is the pressure exerted? What if it strikes a perfect reflector? What fraction of the atmospheric pressure does this amount to?


Part 1. Energy and momentum density.

Because it doesn’t add too much complexity, I’m going to calculate these using the more general elliptically polarized wave solutions. Our vector potential (in the Coulomb gauge \phi = 0, \boldsymbol{\nabla} \cdot \mathbf{A} = 0) has the form

\begin{aligned}\mathbf{A} = \text{Real} \boldsymbol{\beta} e^{i (\omega t - \mathbf{k} \cdot \mathbf{x}) }.\end{aligned} \hspace{\stretch{1}}(1.1)

The elliptical polarization case only differs from the linear case by allowing \boldsymbol{\beta} to be complex, rather than purely real or purely imaginary. Observe that the Coulomb gauge condition \boldsymbol{\nabla} \cdot \mathbf{A} = 0 implies

\begin{aligned}\boldsymbol{\beta} \cdot \mathbf{k} = 0,\end{aligned} \hspace{\stretch{1}}(1.2)

a fact that will kill off terms in a number of places in the following manipulations.

Also observe that for this to be a solution of the wave equation, with operator

\begin{aligned}\frac{1}{{c^2}} \frac{\partial^2 {{}}}{\partial {{t}}^2} - \Delta,\end{aligned} \hspace{\stretch{1}}(1.3)

the frequency and wave vector must be related by the condition

\begin{aligned}\frac{\omega}{c} = {\left\lvert{\mathbf{k}}\right\rvert} = k.\end{aligned} \hspace{\stretch{1}}(1.4)

For the time and spatial phase let’s write

\begin{aligned}\theta = \omega t - \mathbf{k} \cdot \mathbf{x}.\end{aligned} \hspace{\stretch{1}}(1.5)

In the Coulomb gauge, our electric and magnetic fields are

\begin{aligned}\mathbf{E} &= -\frac{1}{{c}}\frac{\partial {\mathbf{A}}}{\partial {t}} = \text{Real} \frac{-i\omega}{c} \boldsymbol{\beta} e^{i\theta} \\ \mathbf{B} &= \boldsymbol{\nabla} \times \mathbf{A} = \text{Real} i \boldsymbol{\beta} \times \mathbf{k} e^{i\theta}\end{aligned} \hspace{\stretch{1}}(1.6)

Similar to \S 48 of the text [1], let’s split \boldsymbol{\beta} into a phase and perpendicular vector components so that

\begin{aligned}\boldsymbol{\beta} = \mathbf{b} e^{-i\alpha}\end{aligned} \hspace{\stretch{1}}(1.8)

where \mathbf{b} has a real square

\begin{aligned}\mathbf{b}^2 = {\left\lvert{\boldsymbol{\beta}}\right\rvert}^2.\end{aligned} \hspace{\stretch{1}}(1.9)

This allows a split into two perpendicular real vectors

\begin{aligned}\mathbf{b} = \mathbf{b}_1 + i \mathbf{b}_2,\end{aligned} \hspace{\stretch{1}}(1.10)

where \mathbf{b}_1 \cdot \mathbf{b}_2 = 0 since \mathbf{b}^2 = \mathbf{b}_1^2 - \mathbf{b}_2^2 + 2 \mathbf{b}_1 \cdot \mathbf{b}_2 is real.

Our electric and magnetic fields are now reduced to

\begin{aligned}\mathbf{E} &= \text{Real} \left( \frac{-i\omega}{c} \mathbf{b} e^{i(\theta - \alpha)} \right) \\ \mathbf{B} &= \text{Real} \left( i \mathbf{b} \times \mathbf{k} e^{i(\theta - \alpha)} \right) \end{aligned} \hspace{\stretch{1}}(1.11)

or explicitly in terms of \mathbf{b}_1 and \mathbf{b}_2

\begin{aligned}\mathbf{E} &= \frac{\omega}{c} ( \mathbf{b}_1 \sin(\theta-\alpha) + \mathbf{b}_2 \cos(\theta-\alpha)) \\ \mathbf{B} &= ( \mathbf{k} \times \mathbf{b}_1 ) \sin(\theta-\alpha) + (\mathbf{k} \times \mathbf{b}_2) \cos(\theta-\alpha) \end{aligned} \hspace{\stretch{1}}(1.13)

The special case of interest for this problem, since it only strictly asked for linear polarization, is where \alpha = 0 and one of \mathbf{b}_1 or \mathbf{b}_2 is zero (i.e. \boldsymbol{\beta} is strictly real or strictly imaginary). The case with \boldsymbol{\beta} strictly real, as done in class, is

\begin{aligned}\mathbf{E} &= \frac{\omega}{c} \mathbf{b}_1 \sin(\theta-\alpha) \\ \mathbf{B} &= ( \mathbf{k} \times \mathbf{b}_1 ) \sin(\theta-\alpha) \end{aligned} \hspace{\stretch{1}}(1.15)

Now let's calculate the energy density and Poynting vectors. We'll need a few intermediate results.

\begin{aligned}(\text{Real} \mathbf{d} e^{i\phi})^2 &= \frac{1}{{4}} ( \mathbf{d} e^{i\phi} + \mathbf{d}^{*} e^{-i\phi})^2 \\ &= \frac{1}{{4}} ( \mathbf{d}^2 e^{2 i \phi} + (\mathbf{d}^{*})^2 e^{-2 i \phi} + 2 {\left\lvert{\mathbf{d}}\right\rvert}^2 ) \\ &= \frac{1}{{2}} \left( {\left\lvert{\mathbf{d}}\right\rvert}^2 + \text{Real} ( \mathbf{d} e^{i \phi} )^2 \right),\end{aligned}


\begin{aligned}(\text{Real} \mathbf{d} e^{i\phi}) \times (\text{Real} \mathbf{e} e^{i\phi}) &= \frac{1}{{4}} ( \mathbf{d} e^{i\phi} + \mathbf{d}^{*} e^{-i\phi}) \times ( \mathbf{e} e^{i\phi} + \mathbf{e}^{*} e^{-i\phi}) \\ &= \frac{1}{{2}} \text{Real} \left( \mathbf{d} \times \mathbf{e}^{*} + (\mathbf{d} \times \mathbf{e}) e^{2 i \phi} \right).\end{aligned}
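Both of these real-part identities are quick to verify numerically with complex vectors (a numpy sketch with random data):

```python
import numpy as np

rng = np.random.default_rng(2)
d = rng.normal(size=3) + 1j * rng.normal(size=3)
e = rng.normal(size=3) + 1j * rng.normal(size=3)
phi = 0.7

w, u = d * np.exp(1j * phi), e * np.exp(1j * phi)

# (Real d e^{i phi})^2 = (1/2)(|d|^2 + Real (d e^{i phi})^2)
lhs1 = w.real @ w.real
rhs1 = 0.5 * (np.real(np.conj(d) @ d) + np.real(w @ w))
assert np.isclose(lhs1, rhs1)

# (Real d e^{i phi}) x (Real e e^{i phi}) = (1/2) Real(d x e* + (d x e) e^{2 i phi})
lhs2 = np.cross(w.real, u.real)
rhs2 = 0.5 * np.real(np.cross(d, np.conj(e)) + np.cross(d, e) * np.exp(2j * phi))
assert np.allclose(lhs2, rhs2)
```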

Let’s use arrowed vectors for the phasor parts

\begin{aligned}\vec{E} &= \frac{-i\omega}{c} \mathbf{b} e^{i(\theta - \alpha)} \\ \vec{B} &= i \mathbf{b} \times \mathbf{k} e^{i(\theta - \alpha)},\end{aligned} \hspace{\stretch{1}}(1.17)

where we can recover our vector quantities by taking real parts \mathbf{E} = \text{Real} \vec{E}, \mathbf{B} = \text{Real} \vec{B}. Our energy density in terms of these phasors is then

\begin{aligned}\mathcal{E} = \frac{1}{{8\pi}} (\mathbf{E}^2 + \mathbf{B}^2)= \frac{1}{{16\pi}} \left( {\left\lvert{\vec{E}}\right\rvert}^2 + {\left\lvert{\vec{B}}\right\rvert}^2 + \text{Real} ({\vec{E}}^2 + {\vec{B}}^2) \right).\end{aligned} \hspace{\stretch{1}}(1.19)

This is

\begin{aligned}\mathcal{E} &=\frac{1}{{16\pi}}\left(\frac{\omega^2}{c^2} {\left\lvert{\mathbf{b}}\right\rvert}^2 + {\left\lvert{\mathbf{b} \times \mathbf{k}}\right\rvert}^2-\text{Real} \left(\frac{\omega^2}{c^2} \mathbf{b}^2 + (\mathbf{b} \times \mathbf{k})^2\right)e^{2 i(\theta - \alpha)} \right).\end{aligned}

Note that \omega^2/c^2 = \mathbf{k}^2, and {\left\lvert{\mathbf{b} \times \mathbf{k}}\right\rvert}^2 = {\left\lvert{\mathbf{b}}\right\rvert}^2 \mathbf{k}^2 (since \mathbf{b} \cdot \mathbf{k} = 0). Also (\mathbf{b} \times \mathbf{k})^2 = \mathbf{b}^2 \mathbf{k}^2, so we have

\begin{aligned}\boxed{\mathcal{E} =\frac{ \mathbf{k}^2 }{8\pi}\left({\left\lvert{\mathbf{b}}\right\rvert}^2 -\text{Real} \mathbf{b}^2 e^{2 i(\theta - \alpha)} \right).}\end{aligned} \hspace{\stretch{1}}(1.20)

Now, for the Poynting vector. We have

\begin{aligned}\mathbf{S} = \frac{c}{4 \pi} \mathbf{E} \times \mathbf{B} = \frac{c}{8 \pi} \text{Real} \left( \vec{E} \times \vec{B}^{*} + \vec{E} \times \vec{B} \right).\end{aligned} \hspace{\stretch{1}}(1.21)

This is

\begin{aligned}S &= \frac{c}{8 \pi} \text{Real} \left( -k \mathbf{b} \times (\mathbf{b}^{*} \times \mathbf{k}) + k \mathbf{b} \times (\mathbf{b} \times \mathbf{k} ) e^{2 i(\theta - \alpha)} \right) \\ \end{aligned}

Reducing the terms we get \mathbf{b} \times (\mathbf{b}^{*} \times \mathbf{k}) = -\mathbf{k} {\left\lvert{\mathbf{b}}\right\rvert}^2, and \mathbf{b} \times (\mathbf{b} \times \mathbf{k}) = -\mathbf{k} \mathbf{b}^2, leaving

\begin{aligned}\boxed{S = \frac{c \hat{\mathbf{k}} \mathbf{k}^2 }{8 \pi} \left( {\left\lvert{\mathbf{b}}\right\rvert}^2 - \text{Real} \mathbf{b}^2 e^{2 i(\theta - \alpha)} \right) = c \hat{\mathbf{k}} \mathcal{E}}\end{aligned} \hspace{\stretch{1}}(1.22)

Now, the text in \S 47 defines the energy flux as the Poynting vector, and the momentum density as \mathbf{S}/c^2, so we just divide 1.22 by c^2 for the momentum density and we are done. For the linearly polarized case (all that was actually asked for, but less cool to calculate), where \mathbf{b} is real, we have

\begin{aligned}\mbox{Energy density} &= \mathcal{E} = \frac{ \mathbf{k}^2 \mathbf{b}^2 }{8\pi} ( 1 - \cos( 2 (\omega t - \mathbf{k} \cdot \mathbf{x})) ) \\ \mbox{Energy flux} &= \mathbf{S} = c \hat{\mathbf{k}} \mathcal{E} \\ \mbox{Momentum density} &= \frac{1}{{c^2}} \mathbf{S} = \frac{\hat{\mathbf{k}}}{c} \mathcal{E}.\end{aligned} \hspace{\stretch{1}}(1.23)
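For the linearly polarized case these results are easy to sanity check by building \mathbf{E} and \mathbf{B} explicitly from 1.15 (a sketch with arbitrary numbers, Gaussian units, c = 1):

```python
import numpy as np

c = 1.0
k = np.array([0.0, 0.0, 2.0])           # wave vector; omega = c |k|
omega = c * np.linalg.norm(k)
b1 = np.array([1.3, 0.0, 0.0])          # real polarization vector, b1 . k = 0
x = np.array([0.1, -0.2, 0.4])
t = 0.3
theta = omega * t - k @ x

E = (omega / c) * b1 * np.sin(theta)    # fields from (1.15), alpha = 0
B = np.cross(k, b1) * np.sin(theta)

density = (E @ E + B @ B) / (8 * np.pi)
boxed = (k @ k) * (b1 @ b1) / (8 * np.pi) * (1 - np.cos(2 * theta))  # (1.23)
assert np.isclose(density, boxed)

S = c / (4 * np.pi) * np.cross(E, B)
khat = k / np.linalg.norm(k)
assert np.allclose(S, c * khat * density)   # energy flux S = c khat * density
```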

Part 2. Averaged.

We want to average over one period, the time T such that \omega T = 2 \pi, so the average is

\begin{aligned}\left\langle{{f}}\right\rangle = \frac{\omega}{2\pi} \int_0^{2\pi/\omega} f dt.\end{aligned} \hspace{\stretch{1}}(1.26)

It is clear that this will just kill off the sinusoidal terms, leaving

\begin{aligned}\mbox{Average Energy density} &= \left\langle{{\mathcal{E}}}\right\rangle = \frac{ \mathbf{k}^2 {\left\lvert{\mathbf{b}}\right\rvert}^2 }{8\pi} \\ \mbox{Average Energy flux} &= \left\langle{\mathbf{S}}\right\rangle = c \hat{\mathbf{k}} \mathcal{E} \\ \mbox{Average Momentum density} &= \frac{1}{{c^2}} \left\langle{\mathbf{S}}\right\rangle = \frac{\hat{\mathbf{k}}}{c} \mathcal{E}.\end{aligned} \hspace{\stretch{1}}(1.27)

Part 3. Pressure.

The magnitude of the momentum of light is related to its energy by

\begin{aligned}\mathbf{p} = \frac{\mathcal{E}}{c}\end{aligned} \hspace{\stretch{1}}(1.30)

so we can loosely identify the magnitude of the force as

\begin{aligned}\frac{d{\mathbf{p}}}{dt} &= \frac{1}{{c}} \frac{\partial {}}{\partial {t}} \int \frac{\mathbf{E}^2 + \mathbf{B}^2}{8 \pi} d^3 \mathbf{x} \\ &= \int d^2 \boldsymbol{\sigma} \cdot \frac{\mathbf{S}}{c}.\end{aligned}

With pressure as the force per area, we could identify

\begin{aligned}\frac{\mathbf{S}}{c}\end{aligned} \hspace{\stretch{1}}(1.31)

as the instantaneous (directed) pressure on a surface. What is that for linearly polarized light? We have from above for the linear polarized case (where {\left\lvert{\mathbf{b}}\right\rvert}^2 = \mathbf{b}^2)

\begin{aligned}\mathbf{S} = \frac{c \hat{\mathbf{k}} \mathbf{k}^2 \mathbf{b}^2 }{8 \pi} ( 1 - \cos( 2 (\omega t - \mathbf{k} \cdot \mathbf{x}) ) )\end{aligned} \hspace{\stretch{1}}(1.32)

If we look at the magnitude of the average pressure from the radiation, we have

\begin{aligned}{\left\lvert{\frac{\left\langle{\mathbf{S}}\right\rangle}{c}}\right\rvert} = \frac{\mathbf{k}^2 \mathbf{b}^2 }{8 \pi}.\end{aligned} \hspace{\stretch{1}}(1.33)

Part 4. Sunlight.

With atmospheric pressure at 101.3 kPa, and the pressure from the light at 1300 W/m^2 divided by 3 x 10^8 m/s, we have roughly 4 x 10^-6 Pa of pressure from the sunlight, only \sim 4 x 10^-11 of the total atmospheric pressure. Wow. Very tiny!

Would it make any difference if the surface is a perfect absorber or a reflector? Consider a ball hitting a wall. If it manages to embed itself in the wall, the wall will have to move a bit to conserve momentum. However, if the ball bounces off, twice the momentum has been transferred to the wall. The numbers above would be for perfect absorption, so double them for a perfect reflector.
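Plugging in the numbers (just arithmetic, but worth scripting to keep the powers of ten honest):

```python
intensity = 1300.0      # W/m^2, average solar intensity at Earth
c = 3.0e8               # m/s
p_atm = 101.3e3         # Pa, atmospheric pressure

p_absorb = intensity / c        # radiation pressure on a perfect absorber
p_reflect = 2.0 * p_absorb      # perfect reflector: twice the momentum transfer
fraction = p_absorb / p_atm

assert 4.0e-6 < p_absorb < 4.5e-6       # roughly 4 x 10^-6 Pa
assert 4.0e-11 < fraction < 4.5e-11     # ~4 x 10^-11 of atmospheric pressure
```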

Problem 2. Spherical EM waves.


Suppose you are given:

\begin{aligned}\vec{E}(r, \theta, \phi, t) = A \frac{\sin\theta}{r} \left( \cos(k r - \omega t) - \frac{1}{{k r}} \sin(k r - \omega t) \right) \hat{\boldsymbol{\phi}}\end{aligned} \hspace{\stretch{1}}(2.34)

where \omega = k c and \hat{\boldsymbol{\phi}} is the unit vector in the \phi-direction. This is a simple example of a spherical wave.

\item Show that \vec{E} obeys all four Maxwell equations in vacuum and find the associated magnetic field.
\item Calculate the Poynting vector. Average \vec{S} over a full cycle to get the intensity vector \vec{I} \equiv \left\langle{{\vec{S}}}\right\rangle. Where does it point to? How does it depend on r?
\item Integrate the intensity vector flux through a spherical surface centered at the origin to find the total power radiated.


Part 1. Maxwell equation verification and magnetic field.

Our vacuum Maxwell equations to verify are

\begin{aligned}\nabla \cdot \vec{E} &= 0 \\ \nabla \times \vec{B} -\frac{1}{{c}} \frac{\partial {\vec{E}}}{\partial {t}} &= 0 \\ \nabla \cdot \vec{B} &= 0 \\ \nabla \times \vec{E} +\frac{1}{{c}} \frac{\partial {\vec{B}}}{\partial {t}} &= 0.\end{aligned} \hspace{\stretch{1}}(2.35)

We’ll also need the spherical polar forms of the divergence and curl operators, as found in \S 1.4 of [2]

\begin{aligned}\nabla \cdot \vec{v} &=\frac{1}{{r^2}} \partial_r ( r^2 v_r )+ \frac{1}{{r\sin\theta}} \partial_\theta (\sin\theta v_\theta)+ \frac{1}{{r\sin\theta}} \partial_\phi v_\phi \\ \nabla \times \vec{v} &=\frac{1}{{r \sin\theta}} \left(\partial_\theta (\sin\theta v_\phi) - \partial_\phi v_\theta\right) \hat{\mathbf{r}}+\frac{1}{{r }} \left(\frac{1}{{\sin\theta}} \partial_\phi v_r - \partial_r (r v_\phi)\right) \hat{\boldsymbol{\theta}}+\frac{1}{{r }} \left(\partial_r (r v_\theta) - \partial_\theta v_r\right) \hat{\boldsymbol{\phi}}\end{aligned} \hspace{\stretch{1}}(2.39)

We can start by verifying the divergence equation for the electric field. Observe that our electric field has only an E_\phi component, so our divergence is

\begin{aligned}\nabla \cdot \vec{E}=\frac{1}{{r\sin\theta}} \partial_\phi \left(A \frac{\sin\theta}{r} \left( \cos(k r - \omega t) - \frac{1}{{k r}} \sin(k r - \omega t) \right) \right) = 0.\end{aligned} \hspace{\stretch{1}}(2.41)

We have a zero divergence since the component E_\phi has no \phi dependence (whereas \vec{E} itself does since the unit vector \hat{\boldsymbol{\phi}} = \hat{\boldsymbol{\phi}}(\phi)).

All of the rest of Maxwell’s equations require \vec{B} so we’ll have to first calculate that before progressing further.

An aside on approaches attempted to find \vec{B}

I tried two approaches without success to calculate \vec{B}. First I hoped that I could just integrate -c \vec{E} with respect to time to obtain \vec{A} and then take the curl. Doing so gave me a result for which \nabla \times \vec{B} - \frac{1}{{c}} {\partial {\vec{E}}}/{\partial {t}} \ne 0. I hunted for an algebraic error that would account for this, but could not find one.

The second approach that I tried, also without success, was to simply take the cross product \hat{\mathbf{r}} \times \vec{E}. This worked in the monochromatic plane wave case where we had

\begin{aligned}\vec{B} &= (\vec{k} \times \vec{\beta}) \sin(\omega t - \vec{k} \cdot \vec{x}) \\ \vec{E} &= \vec{\beta} {\left\lvert{\vec{k}}\right\rvert} \sin(\omega t - \vec{k} \cdot \vec{x})\end{aligned} \hspace{\stretch{1}}(2.42)

since one can easily show that \vec{B} = \hat{\mathbf{k}} \times \vec{E}. Again, I ended up with a result for \vec{B} that did not have a zero divergence.

Finding \vec{B} with a more systematic approach.

Following [3] \S 16.2, let’s try a phasor approach, assuming that all the solutions, whatever they are, have their time dependence in an e^{-i\omega t} term.

Let’s write our fields as

\begin{aligned}\vec{E} &= \text{Real} (\mathbf{E} e^{-i \omega t}) \\ \vec{B} &= \text{Real} (\mathbf{B} e^{-i \omega t}).\end{aligned} \hspace{\stretch{1}}(2.44)

Substitution back into Maxwell’s equations thus requires equality in the real parts of

\begin{aligned}\nabla \cdot \mathbf{E} &= 0 \\ \nabla \cdot \mathbf{B} &= 0 \\ \nabla \times \mathbf{B} &= - i \frac{\omega}{c} \mathbf{E} \\ \nabla \times \mathbf{E} &= i \frac{\omega}{c} \mathbf{B}\end{aligned} \hspace{\stretch{1}}(2.46)

With k = \omega/c we can now directly compute the magnetic field phasor

\begin{aligned}\mathbf{B} = -\frac{i}{k} \nabla \times \mathbf{E}.\end{aligned} \hspace{\stretch{1}}(2.50)

The electric field of this problem can be put into phasor form by noting

\begin{aligned}\vec{E} = A \frac{\sin\theta}{r} \text{Real} \left( e^{i (k r - \omega t)} - \frac{i}{k r} e^{i(k r - \omega t)} \right) \hat{\boldsymbol{\phi}},\end{aligned} \hspace{\stretch{1}}(2.51)

which allows for reading off the phasor part directly

\begin{aligned}\mathbf{E} = A \frac{\sin\theta}{r} \left( 1 - \frac{i}{k r} \right) e^{i k r} \hat{\boldsymbol{\phi}}.\end{aligned} \hspace{\stretch{1}}(2.52)

Now we can compute the magnetic field phasor \mathbf{B}. Since we have only a \phi component in our field, the curl will have just \hat{\mathbf{r}} and \hat{\boldsymbol{\theta}} components. This is reasonable since we expect it to be perpendicular to \mathbf{E}.

\begin{aligned}\nabla \times (v_\phi \hat{\boldsymbol{\phi}}) = \frac{1}{{r \sin\theta}} \partial_\theta (\sin\theta v_\phi) \hat{\mathbf{r}}- \frac{1}{{r }} \partial_r (r v_\phi) \hat{\boldsymbol{\theta}}.\end{aligned} \hspace{\stretch{1}}(2.53)

Chugging through all the algebra we have

\begin{aligned}i k \mathbf{B} &=\nabla \times \mathbf{E} \\ &=\frac{2 A \cos\theta}{r^2} \left( 1 - \frac{i}{k r} \right) e^{i k r} \hat{\mathbf{r}}- \frac{A\sin\theta}{r } \frac{\partial {}}{\partial {r}} \left( \left( 1 - \frac{i}{k r} \right) e^{i k r} \right)\hat{\boldsymbol{\theta}} \\ &=\frac{2 A \cos\theta}{r^2} \left( 1 - \frac{i}{k r} \right) e^{i k r} \hat{\mathbf{r}}- \frac{A\sin\theta}{r } \left( i k + \frac{1}{{r}} + \frac{i}{k r^2} \right) e^{i k r} \hat{\boldsymbol{\theta}},\end{aligned}

so our magnetic phasor is

\begin{aligned}\mathbf{B} =\frac{2 A \cos\theta}{k r^2} \left( -i - \frac{1}{k r} \right) e^{i k r} \hat{\mathbf{r}}- \frac{A\sin\theta}{r} \left( 1 - \frac{i}{k r} + \frac{1}{k^2 r^2} \right) e^{i k r} \hat{\boldsymbol{\theta}}\end{aligned} \hspace{\stretch{1}}(2.54)
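Before moving back to the time domain, a quick numerical spot check of the curl algebra doesn’t hurt. This is only a throwaway sketch (the values of A, k, and the sample point are arbitrary, not from the problem): it compares the spherical curl of the electric phasor 2.52, evaluated by central differences, against i k times the magnetic phasor 2.54.

```python
import cmath
import math

A, k = 1.3, 2.0       # arbitrary sample constants (not from the problem)
r0, th0 = 1.7, 0.9    # arbitrary sample point
h = 1e-6              # central-difference step

def E_phi(r, th):
    # electric phasor component, eq. (2.52)
    return A * math.sin(th) / r * (1 - 1j/(k*r)) * cmath.exp(1j*k*r)

def curl_E(r, th):
    # curl of a purely phi-directed field in spherical coordinates, eq. (2.53)
    d_th = (math.sin(th+h)*E_phi(r, th+h) - math.sin(th-h)*E_phi(r, th-h)) / (2*h)
    d_r = ((r+h)*E_phi(r+h, th) - (r-h)*E_phi(r-h, th)) / (2*h)
    return d_th/(r*math.sin(th)), -d_r/r   # (r-hat, theta-hat) components

def B_phasor(r, th):
    # magnetic phasor components, eq. (2.54)
    B_r = 2*A*math.cos(th)/(k*r**2) * (-1j - 1/(k*r)) * cmath.exp(1j*k*r)
    B_th = -A*math.sin(th)/r * (1 - 1j/(k*r) + 1/(k*r)**2) * cmath.exp(1j*k*r)
    return B_r, B_th

cr, ct = curl_E(r0, th0)
br, bt = B_phasor(r0, th0)
err = max(abs(cr - 1j*k*br), abs(ct - 1j*k*bt))
```

The finite-difference curl agrees with i k \mathbf{B} to well below the differencing error, confirming the algebra between 2.52 and 2.54.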

Multiplying by e^{-i\omega t} and taking real parts gives us the messy magnetic field expression

\begin{aligned}\begin{aligned}\vec{B} &=\frac{A}{r} \frac{2 \cos\theta}{k r} \left( \sin(k r - \omega t)- \frac{1}{k r} \cos(k r - \omega t) \right)\hat{\mathbf{r}} \\ &- \frac{A}{r} \frac{\sin\theta}{k r}\left(\sin(k r - \omega t)+ \frac{k^2 r^2 + 1}{k r}\cos(k r - \omega t)\right)\hat{\boldsymbol{\theta}}.\end{aligned}\end{aligned} \hspace{\stretch{1}}(2.55)

Since this was constructed directly from \nabla \times \vec{E} +\frac{1}{{c}} {\partial {\vec{B}}}/{\partial {t}} = 0, this implicitly verifies one more of Maxwell’s equations, leaving only \nabla \cdot \vec{B} = 0, and \nabla \times \vec{B} -\frac{1}{{c}} {\partial {\vec{E}}}/{\partial {t}} = 0. Neither of these looks particularly fun to verify; however, we can take a small shortcut and use the phasors to do the verification without the explicit time dependence.

From 2.54 we have for the divergence

\begin{aligned}\nabla \cdot \mathbf{B} &=\frac{2 A \cos\theta}{k r^2 } \frac{\partial {}}{\partial {r}} \left(\left( -i - \frac{1}{k r} \right) e^{i k r} \right)- \frac{A 2 \cos\theta}{r^2} \left( 1 - \frac{i}{k r} + \frac{1}{k^2 r^2} \right) e^{i k r}  \\ &=\frac{2 A \cos\theta}{r^2 } e^{i k r}\left(\frac{1}{{k}}\left( \frac{1}{{k r^2}} + i k \left(-i - \frac{1}{{k r}}\right)\right)-\left( 1 - \frac{i}{k r} + \frac{1}{k^2 r^2} \right) \right) \\ &= 0 \qquad \square\end{aligned}
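That bracketed cancellation can also be spot-checked numerically for arbitrary k and r (a throwaway check, not part of the derivation):

```python
# spot-check the bracket from the divergence computation above
k, r = 2.0, 1.7   # arbitrary sample values
bracket = (1/k)*(1/(k*r**2) + 1j*k*(-1j - 1/(k*r))) - (1 - 1j/(k*r) + 1/(k*r)**2)
```

The bracket evaluates to zero (up to floating point roundoff) for any k r.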

Let’s also verify the last of Maxwell’s equations in phasor form. The time dependence is knocked out, and we want to see that taking the curl of the magnetic phasor returns us (scaled) the electric phasor. That is

\begin{aligned}\nabla \times \mathbf{B} = - i \frac{\omega}{c} \mathbf{E}\end{aligned} \hspace{\stretch{1}}(2.56)

With only r and \theta components in the magnetic phasor we have

\begin{aligned}\nabla \times (v_r \hat{\mathbf{r}} + v_\theta \hat{\boldsymbol{\theta}}) =-\frac{1}{{r \sin\theta}} \partial_\phi v_\theta\hat{\mathbf{r}}+\frac{1}{{r }} \frac{1}{{\sin\theta}} \partial_\phi v_r \hat{\boldsymbol{\theta}}+\frac{1}{{r }} \left(\partial_r (r v_\theta) - \partial_\theta v_r\right) \hat{\boldsymbol{\phi}}\end{aligned} \hspace{\stretch{1}}(2.57)

Immediately, we see that with no explicit \phi dependence in the components, we have neither \hat{\mathbf{r}} nor \hat{\boldsymbol{\theta}} terms in the curl, which is good. Our curl is now just

\begin{aligned}\nabla \times \mathbf{B} &=\frac{1}{{r }} \left( A\sin\theta \partial_r \left( 1 - \frac{i}{k r} + \frac{1}{k^2 r^2} \right) e^{i k r} +\frac{2 A \sin\theta}{k r^2} \left( -i - \frac{1}{k r} \right) e^{i k r} \right) \hat{\boldsymbol{\phi}} \\ &=A \sin\theta \frac{1}{{r }} \left(\partial_r \left( 1 - \frac{i}{k r} + \frac{1}{k^2 r^2} \right) e^{i k r} +\frac{2 }{k r^2} \left( -i - \frac{1}{k r} \right) e^{i k r} \right) \hat{\boldsymbol{\phi}} \\ &=A \sin\theta e^{i k r}\frac{1}{{r }} \left((ik)\left( 1 - \frac{i}{k r} + \frac{1}{k^2 r^2} \right) +\left( \frac{i}{k r^2} - \frac{2}{k^2 r^3} \right) +\frac{2 }{k r^2} \left( -i - \frac{1}{k r} \right) \right) \hat{\boldsymbol{\phi}} \\ &=A \sin\theta e^{i k r}\frac{1}{{r }} \left(i k + \frac{1}{{r}} - \frac{ 4 }{k^2 r^3}\right) \hat{\boldsymbol{\phi}} \\ \end{aligned}

What we expect is \nabla \times \mathbf{B} = - i k \mathbf{E} which is

\begin{aligned}- i k \mathbf{E} =A \sin\theta e^{i k r}\frac{1}{{r }} \left(- i k - \frac{1}{{r}}\right) \hat{\boldsymbol{\phi}} \end{aligned} \hspace{\stretch{1}}(2.58)

FIXME: Somewhere I must have made a sign error, because these aren’t matching! Have an extra 1/r^3 term and the wrong sign on the 1/r term.

Part 2. Poynting and intensity.

Our Poynting vector is

\begin{aligned}\vec{S} = \frac{c}{4 \pi} \vec{E} \times \vec{B},\end{aligned} \hspace{\stretch{1}}(2.59)

which we could calculate from 2.34 and 2.55. However, that looks like it would be a mess to multiply out. Let’s instead use the trick from \S 48 of the course text [1], and work with the complex quantities directly, noting that we have

\begin{aligned}(\text{Real} \mathbf{E} e^{i \alpha}) \times (\text{Real} \mathbf{B} e^{i \alpha}) &= \frac{1}{{4}} ( \mathbf{E} e^{i \alpha} + \mathbf{E}^{*} e^{-i \alpha}) \times ( \mathbf{B} e^{i \alpha} + \mathbf{B}^{*} e^{-i \alpha}) \\ &= \frac{1}{{2}} \text{Real} \left( \mathbf{E} \times \mathbf{B}^{*} + (\mathbf{E} \times \mathbf{B}) e^{2 i \alpha} \right).\end{aligned}
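This averaging identity is easy to check numerically for random complex vectors and a random phase (a generic check of the identity itself, not specific to this problem):

```python
import cmath
import random

random.seed(1)

def cvec():
    # random complex 3-vector
    return [complex(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(3)]

def cross(u, v):
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

E, B = cvec(), cvec()
alpha = random.uniform(0, 6.28)
ph = cmath.exp(1j*alpha)

# left side: cross product of the real parts
lhs = cross([(e*ph).real for e in E], [(b*ph).real for b in B])

# right side: (1/2) Real( E x B* + (E x B) e^{2 i alpha} )
ExBconj = cross(E, [b.conjugate() for b in B])
ExB = cross(E, B)
rhs = [0.5*(a + b*ph*ph).real for a, b in zip(ExBconj, ExB)]

err = max(abs(l - r) for l, r in zip(lhs, rhs))
```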

Now we can do the Poynting calculation using the simpler relations 2.52, 2.54.

Let’s also write

\begin{aligned}\mathbf{E} &= A e^{i k r} E_\phi \hat{\boldsymbol{\phi}} \\ \mathbf{B} &= A e^{i k r} ( B_r \hat{\mathbf{r}} + B_\theta \hat{\boldsymbol{\theta}} )\end{aligned} \hspace{\stretch{1}}(2.60)


\begin{aligned}E_\phi &= \frac{\sin\theta}{r} \left( 1 - \frac{i}{k r} \right)  \\ B_r &= -\frac{2 \cos\theta}{k r^2} \left( i + \frac{1}{k r} \right)  \\ B_\theta &= - \frac{\sin\theta}{r} \left( 1 - \frac{i}{k r} + \frac{1}{k^2 r^2} \right) \end{aligned} \hspace{\stretch{1}}(2.62)

So our Poynting vector is

\begin{aligned}\vec{S} &= \frac{A^2 c}{2 \pi} \text{Real}\left(E_\phi \hat{\boldsymbol{\phi}} \times ( B_r^{*} \hat{\mathbf{r}} + B_\theta^{*} \hat{\boldsymbol{\theta}} )+E_\phi \hat{\boldsymbol{\phi}} \times ( B_r \hat{\mathbf{r}} + B_\theta \hat{\boldsymbol{\theta}} ) e^{ 2 i ( k r - \omega t ) }\right) \\ \end{aligned}

Note that our unit vector basis \{ \hat{\mathbf{r}}, \hat{\boldsymbol{\theta}}, \hat{\boldsymbol{\phi}} \} was rotated from \{ \hat{\mathbf{z}}, \hat{\mathbf{x}}, \hat{\mathbf{y}} \}, so we have

\begin{aligned}\hat{\boldsymbol{\phi}} \times \hat{\mathbf{r}} &= \hat{\boldsymbol{\theta}} \\ \hat{\boldsymbol{\theta}} \times \hat{\boldsymbol{\phi}} &= \hat{\mathbf{r}} \\ \hat{\mathbf{r}} \times \hat{\boldsymbol{\theta}} &= \hat{\boldsymbol{\phi}} ,\end{aligned} \hspace{\stretch{1}}(2.65)

and plug this into our Poynting expression

\begin{aligned}\vec{S} &= \frac{A^2 c}{2 \pi} \text{Real}\left(E_\phi B_r^{*} \hat{\boldsymbol{\theta}} -E_\phi B_\theta^{*} \hat{\mathbf{r}} +(E_\phi B_r \hat{\boldsymbol{\theta}} -E_\phi B_\theta \hat{\mathbf{r}} )e^{ 2 i ( k r - \omega t ) }\right) \\ \end{aligned}

Now we have to multiply out our terms. We have

\begin{aligned}E_\phi B_r^{*} &=- \frac{\sin\theta}{r} \frac{2 \cos\theta}{k r^2} \left( 1 - \frac{i}{k r} \right)\left( -i + \frac{1}{k r} \right) \\ &=-\frac{ \sin(2\theta)}{k r^3}\left( -i - \frac{i}{k^2 r^2} \right),\end{aligned}

Since this has no real part, there is no average contribution to \vec{S} in the \hat{\boldsymbol{\theta}} direction. What do we have for the time dependent part?

\begin{aligned}E_\phi B_r &=- \frac{\sin\theta}{r} \frac{2 \cos\theta}{k r^2} \left( 1 - \frac{i}{k r} \right)\left( i + \frac{1}{k r} \right) \\ &=-\frac{ \sin(2\theta)}{k r^3}\left( i + \frac{2}{k r} - \frac{i}{k^2 r^2} \right) \end{aligned}

This is non-zero, so we have a time dependent \hat{\boldsymbol{\theta}} contribution that averages out. Moving on,

\begin{aligned}- E_\phi B_\theta^{*}&= \frac{\sin^2\theta}{r^2} \left( 1 - \frac{i}{k r} \right)\left( 1 + \frac{i}{k r} + \frac{1}{k^2 r^2} \right) \\ &= \frac{\sin^2\theta}{r^2} \left( 1 + \frac{2}{k^2 r^2} - \frac{i}{k^3 r^3}\right).\end{aligned}

This is non-zero, so the steady state Poynting vector is in the outwards radial direction. The last piece is

\begin{aligned}- E_\phi B_\theta&= \frac{\sin^2\theta}{r^2} \left( 1 - \frac{i}{k r} \right)\left( 1 - \frac{i}{k r} + \frac{1}{k^2 r^2} \right) \\ &= \frac{\sin^2\theta}{r^2} \left( 1 - \frac{2i}{k r} - \frac{i}{k^3 r^3}\right).\end{aligned}
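Each of these four product expansions can be double-checked with complex arithmetic at an arbitrary sample point (the values below are chosen arbitrarily, just for the check):

```python
import math

k, r, th = 2.0, 1.7, 0.9   # arbitrary sample values
kr = k*r

# components as defined in 2.62
E_phi = math.sin(th)/r * (1 - 1j/kr)
B_r = -2*math.cos(th)/(k*r**2) * (1j + 1/kr)
B_th = -math.sin(th)/r * (1 - 1j/kr + 1/kr**2)

# the claimed simplifications
p1 = -math.sin(2*th)/(k*r**3) * (-1j - 1j/kr**2)          # E_phi B_r^*
p2 = -math.sin(2*th)/(k*r**3) * (1j + 2/kr - 1j/kr**2)    # E_phi B_r
p3 = math.sin(th)**2/r**2 * (1 + 2/kr**2 - 1j/kr**3)      # -E_phi B_th^*
p4 = math.sin(th)**2/r**2 * (1 - 2j/kr - 1j/kr**3)        # -E_phi B_th

err = max(abs(E_phi*B_r.conjugate() - p1),
          abs(E_phi*B_r - p2),
          abs(-E_phi*B_th.conjugate() - p3),
          abs(-E_phi*B_th - p4))
```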

Assembling all the results we have

\begin{aligned}\begin{aligned}\vec{S} &= \frac{A^2 c}{2 \pi} \frac{\sin^2\theta}{r^2} \left( 1 + \frac{2}{k^2 r^2} \right) \hat{\mathbf{r}} \\ &\quad +\frac{A^2 c}{2 \pi} \text{Real} \left(\left(-\frac{ \sin(2\theta)}{k r^3} \left( i + \frac{2}{k r} - \frac{i}{k^2 r^2} \right) \hat{\boldsymbol{\theta}}+\frac{\sin^2\theta}{r^2} \left( 1 - \frac{2i}{k r} - \frac{i}{k^3 r^3}\right) \hat{\mathbf{r}} \right) e^{ 2 i ( k r - \omega t ) }\right) \end{aligned}\end{aligned}

We can read off the intensity directly

\begin{aligned}\vec{I} = \left\langle{{\vec{S}}}\right\rangle = \frac{A^2 c \sin^2 \theta}{2 \pi r^2} \left( 1 + \frac{2}{k^2 r^2} \right) \hat{\mathbf{r}} \end{aligned} \hspace{\stretch{1}}(2.68)

Part 3. Find the power.

Integrating the intensity vector 2.68 over a spherical surface of radius r, we have

\begin{aligned}\int r^2 \sin\theta d\theta d\phi\vec{I} &= \int r^2 \sin\theta d\theta d\phi \frac{A^2 c \sin^2 \theta}{2 \pi r^2} \left( 1 + \frac{2}{k^2 r^2} \right) \hat{\mathbf{r}} \\ &= A^2 c \left( 1 + \frac{2}{k^2 r^2} \right) \hat{\mathbf{r}} \int_0^\pi \sin^3\theta d\theta \\ &= A^2 c \left( 1 + \frac{2}{k^2 r^2} \right) \hat{\mathbf{r}} {\left.\frac{1}{{12}}( \cos(3\theta) - 9 \cos\theta )\right\vert}_0^\pi.\end{aligned}

Our average power through the surface is therefore

\begin{aligned}\int d^2 \boldsymbol{\sigma} \vec{I} =\frac{4 A^2 c }{3}\left( 1 + \frac{2}{k^2 r^2} \right) \hat{\mathbf{r}}.\end{aligned} \hspace{\stretch{1}}(2.69)
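The \theta integral and the antiderivative used above are easy to verify numerically (a throwaway check):

```python
import math

# midpoint rule for the integral of sin^3 over [0, pi]
n = 200000
integral = sum(math.sin((i + 0.5)*math.pi/n)**3 for i in range(n)) * math.pi/n

# antiderivative used in the integration: (cos(3 theta) - 9 cos(theta))/12
F = lambda t: (math.cos(3*t) - 9*math.cos(t))/12
exact = F(math.pi) - F(0)
```

Both give 4/3, the value used in forming the average power.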

Notes on grading of my solution.

Problem 2 above was the graded portion.

FIXME1: I lost a mark in the spot I expected, where I failed to verify one of the Maxwell equations. I’ll still need to figure out what got messed up there.

What occurred to me later, and was also mentioned in the grading of the solution, was that Maxwell’s equations in the space-time domain could have been used to solve for {\partial {\mathbf{B}}}/{\partial {t}} instead of all the momentum space logic (which simplified some things, but probably complicated others).

FIXME2: I lost a mark on 2.68 with a big X beside it. I’ll have to read the graded solution to see why.

FIXME3: Lost a mark for the final average power result 2.69. Again, I’ll have to go back and figure out why.


[1] L.D. Landau and E.M. Lifshitz. The classical theory of fields. Butterworth-Heinemann, 1980.

[2] D.J. Griffiths. Introduction to Electrodynamics. Prentice-Hall, 1981.

[3] J.D. Jackson. Classical Electrodynamics. John Wiley and Sons, 2nd edition, 1975.

Posted in Math and Physics Learning. | Tagged: , , , , , , | Leave a Comment »

PHY450H1S. Relativistic Electrodynamics Tutorial 5 (TA: Simon Freedman). Angular momentum of EM fields

Posted by peeterjoot on March 10, 2011

[Click here for a PDF of this post with nicer formatting]


Long solenoid of radius R, n turns per unit length, current I. Coaxial with the solenoid are two long cylindrical shells of length l and (\text{radius},\text{charge}) of (a, Q), and (b, -Q) respectively, where a < b.

When current is gradually reduced what happens?

The initial fields.

Initial Magnetic field.

For the initial static conditions where we have only a (constant) magnetic field, the Maxwell-Ampere equation takes the form

\begin{aligned}\boldsymbol{\nabla} \times \mathbf{B} = \frac{4 \pi}{c} \mathbf{j}\end{aligned} \hspace{\stretch{1}}(1.1)

\paragraph{On the name of this equation}. In notes from one of the lectures I had this called the Maxwell-Faraday equation, despite the fact that this isn't the one to which Maxwell added his displacement current. Did the Professor call it that, or was this my addition? In [2] Faraday’s law is also called the Maxwell-Faraday equation. [1] calls this the Ampere-Maxwell equation, which makes more sense.

Put into integral form by integrating over an open surface we have

\begin{aligned}\int_A (\boldsymbol{\nabla} \times \mathbf{B}) \cdot d\mathbf{a} = \frac{4 \pi}{c} \int_A \mathbf{j} \cdot d\mathbf{a}\end{aligned} \hspace{\stretch{1}}(1.2)

The current density passing through the surface is defined as the enclosed current, circulating around the bounding loop

\begin{aligned}I_{\text{enc}} = \int_A \mathbf{j} \cdot d\mathbf{a},\end{aligned} \hspace{\stretch{1}}(1.3)

so by Stokes Theorem we write

\begin{aligned}\int_{\partial A} \mathbf{B} \cdot d\mathbf{l} = \frac{4 \pi}{c} I_{\text{enc}}\end{aligned} \hspace{\stretch{1}}(1.4)

Now consider separately the regions inside and outside the solenoid. Outside, any closed loop encloses zero net current, so

\begin{aligned}\int_{\partial A} \mathbf{B} \cdot d \mathbf{l} = \frac{4 \pi I_{\text{enc}} }{c} = 0,\end{aligned} \hspace{\stretch{1}}(1.5)

and the field there vanishes. For a rectangular loop of length L with one long edge inside the solenoid (and the other outside, where the field is zero), we enclose the equivalent of n L loops, each with current I, so we have

\begin{aligned}\int \mathbf{B} \cdot d\mathbf{l} = \frac{4 \pi n I L}{c} = B L.\end{aligned} \hspace{\stretch{1}}(1.6)

Inside the solenoid our magnetic field is constant while I is constant, and in vector form this is

\begin{aligned}\mathbf{B} = \frac{4 \pi n I}{c} \hat{\mathbf{z}}\end{aligned} \hspace{\stretch{1}}(1.7)

Initial Electric field.

How about the electric fields?

For r < a and r > b we have \mathbf{E} = 0, since there is no net charge enclosed by any Gaussian surface that we choose.

Between a and b we have, for a Gaussian surface of height l (assuming that l \gg a)

\begin{aligned}E (2 \pi r) l = 4 \pi (+Q),\end{aligned} \hspace{\stretch{1}}(1.8)

so we have

\begin{aligned}\mathbf{E} = \frac{2 Q }{r l} \hat{\mathbf{r}}.\end{aligned} \hspace{\stretch{1}}(1.9)

Poynting vector before the current changes.

Our Poynting vector, the energy flux per unit time, is

\begin{aligned}\mathbf{S} = \frac{c}{4 \pi} (\mathbf{E} \times \mathbf{B})\end{aligned} \hspace{\stretch{1}}(1.10)

This is non-zero only in the region between the inner cylinder (radius a) and the solenoid (radius R), since that’s the only place where both \mathbf{E} and \mathbf{B} are non-zero. That is

\begin{aligned}\mathbf{S} &= \frac{c}{4 \pi} (\mathbf{E} \times \mathbf{B}) \\ &=\frac{c}{4 \pi} \frac{2 Q }{r l} \frac{4 \pi n I}{c} \hat{\mathbf{r}} \times \hat{\mathbf{z}} \\ &= -\frac{2 Q n I}{r l} \hat{\boldsymbol{\phi}}\end{aligned}

(since \hat{\mathbf{r}} \times \hat{\boldsymbol{\phi}} = \hat{\mathbf{z}}, cyclic permutation gives \hat{\mathbf{z}} \times \hat{\mathbf{r}} = \hat{\boldsymbol{\phi}}, and thus \hat{\mathbf{r}} \times \hat{\mathbf{z}} = -\hat{\boldsymbol{\phi}})
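The handedness of the cylindrical triad used here can be confirmed with explicit Cartesian components (arbitrary azimuthal angle):

```python
import math

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

phi = 0.7  # arbitrary azimuthal angle
rhat = (math.cos(phi), math.sin(phi), 0.0)
phihat = (-math.sin(phi), math.cos(phi), 0.0)
zhat = (0.0, 0.0, 1.0)

e1 = max(abs(a - b) for a, b in zip(cross(rhat, phihat), zhat))    # r x phi = z
e2 = max(abs(a - b) for a, b in zip(cross(zhat, rhat), phihat))    # z x r = phi
e3 = max(abs(a + b) for a, b in zip(cross(rhat, zhat), phihat))    # r x z = -phi
```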

A motivational aside: Momentum density.

Suppose {\left\lvert{\mathbf{E}}\right\rvert} = {\left\lvert{\mathbf{B}}\right\rvert}, then our Poynting vector is

\begin{aligned}\mathbf{S} = \frac{c}{4 \pi} \mathbf{E} \times \mathbf{B} = \frac{ c \hat{\mathbf{k}}}{4 \pi} \mathbf{E}^2,\end{aligned} \hspace{\stretch{1}}(1.11)


\begin{aligned}\mathcal{E} = \text{energy density} = \frac{\mathbf{E}^2 + \mathbf{B}^2}{8 \pi} = \frac{\mathbf{E}^2}{4 \pi},\end{aligned} \hspace{\stretch{1}}(1.12)


\begin{aligned}\mathbf{S} = c \hat{\mathbf{k}} \mathcal{E} = \mathbf{v} \mathcal{E}.\end{aligned} \hspace{\stretch{1}}(1.13)

Now recall the relation between (relativistic) mechanical momentum \mathbf{p} = \gamma m \mathbf{v} and energy \mathcal{E} = \gamma m c^2

\begin{aligned}\mathbf{p} = \frac{\mathbf{v}}{c^2} \mathcal{E}.\end{aligned} \hspace{\stretch{1}}(1.14)

This justifies calling the quantity

\begin{aligned}\mathbf{P}_{\text{EM}} = \frac{\mathbf{S}}{c^2},\end{aligned} \hspace{\stretch{1}}(1.15)

the momentum density.

Momentum density of the EM fields.

So we label our scaled Poynting vector the momentum density for the field

\begin{aligned}\mathbf{P}_{\text{EM}} = -\frac{2 Q n I}{c^2 r l} \hat{\boldsymbol{\phi}},\end{aligned} \hspace{\stretch{1}}(1.16)

and can now compute an angular momentum density in the field between the inner cylinder and the solenoid prior to changing the currents

\begin{aligned}\mathbf{L}_{\text{EM}}&= \mathbf{r} \times \mathbf{P}_{\text{EM}} \\ &= r \hat{\mathbf{r}} \times \mathbf{P}_{\text{EM}} \\ \end{aligned}

This gives us

\begin{aligned}\mathbf{L}_{\text{EM}} = -\frac{2 Q n I}{c^2 l} \hat{\mathbf{z}} = \text{constant}.\end{aligned} \hspace{\stretch{1}}(1.17)

Note that this is the angular momentum density in the region between the solenoid and the inner cylinder, between z = 0 and z = l. Outside of this region, the angular momentum density is zero.

After the current is changed

Induced electric field

When we turn off (or change) I, some of the magnetic field \mathbf{B} will be converted into electric field \mathbf{E} according to Faraday’s law

\begin{aligned}\boldsymbol{\nabla} \times \mathbf{E} = - \frac{1}{{c}} \frac{\partial {\mathbf{B}}}{\partial {t}}.\end{aligned} \hspace{\stretch{1}}(1.18)

In integral form, utilizing an open surface, this is

\begin{aligned}\int_A (\boldsymbol{\nabla} \times \mathbf{E}) \cdot \hat{\mathbf{n}} dA&=\int_{\partial A} \mathbf{E} \cdot d\mathbf{l} \\ &= - \frac{1}{{c}} \int_A \frac{\partial {\mathbf{B}}}{\partial {t}} \cdot d\mathbf{A} \\ &= - \frac{1}{{c}} \frac{\partial {\Phi_B(t)}}{\partial {t}},\end{aligned}

where we introduce the magnetic flux

\begin{aligned}\Phi_B(t) = \int_A \mathbf{B} \cdot d\mathbf{A}.\end{aligned} \hspace{\stretch{1}}(1.19)

We can utilize a circular surface of radius r, cutting directly across the cylinder perpendicular to \hat{\mathbf{z}}. Recall that we have the magnetic field 1.7 only inside the solenoid. So for r < R this flux is

\begin{aligned}\Phi_B(t)&= \int_A \mathbf{B} \cdot d\mathbf{A} \\ &= (\pi r^2) \frac{4 \pi n I(t)}{c}.\end{aligned}

For r > R only the portion of the surface with radius r \le R contributes to the flux

\begin{aligned}\Phi_B(t)&= \int_A \mathbf{B} \cdot d\mathbf{A} \\ &= (\pi R^2) \frac{4 \pi n I(t)}{c}.\end{aligned}

We can now compute the circulation of the electric field

\begin{aligned}\int_{\partial A} \mathbf{E} \cdot d\mathbf{l} = - \frac{1}{{c}} \frac{\partial {\Phi_B(t)}}{\partial {t}},\end{aligned} \hspace{\stretch{1}}(1.20)

by taking the derivatives of the magnetic flux. For r > R this is

\begin{aligned}\int_{\partial A} \mathbf{E} \cdot d\mathbf{l}&= (2 \pi r) E \\ &=-(\pi R^2) \frac{4 \pi n \dot{I}(t)}{c^2}.\end{aligned}

This gives us the magnitude of the induced electric field

\begin{aligned}E&= -(\pi R^2) \frac{4 \pi n \dot{I}(t)}{2 \pi r c^2} \\ &= -\frac{2 \pi R^2 n \dot{I}(t)}{r c^2}.\end{aligned}

Similarly for r < R we have

\begin{aligned}E = -\frac{2 \pi r n \dot{I}(t)}{c^2}\end{aligned} \hspace{\stretch{1}}(1.21)

Summarizing we have

\begin{aligned}\mathbf{E} =\left\{\begin{array}{l l}-\frac{2 \pi r n \dot{I}(t)}{c^2} \hat{\boldsymbol{\phi}} & \mbox{for } r < R \\ -\frac{2 \pi R^2 n \dot{I}(t)}{r c^2} \hat{\boldsymbol{\phi}} & \mbox{for } r > R\end{array}\right.\end{aligned} \hspace{\stretch{1}}(1.22)
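As a consistency check, the interior and exterior expressions should agree at r = R, and the exterior field should fall off like 1/r. A quick numerical sketch (the radius, turn density, and current ramp rate below are hypothetical sample values, in Gaussian units):

```python
import math

R, n, c = 0.5, 100.0, 3.0e10   # hypothetical radius, turn density, and c (cgs)
I_dot = -2.0e5                 # hypothetical dI/dt as the current ramps down

def E_inside(r):
    # induced field magnitude for r < R
    return -2*math.pi*r*n*I_dot/c**2

def E_outside(r):
    # induced field magnitude for r > R
    return -2*math.pi*R**2*n*I_dot/(r*c**2)

match = math.isclose(E_inside(R), E_outside(R), rel_tol=1e-12)
falloff = math.isclose(E_outside(2*R), E_inside(R)/2, rel_tol=1e-12)
```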

Torque and angular momentum induced by the fields.

Our torque \mathbf{N} = \mathbf{r} \times \mathbf{F} = d\mathbf{L}/dt on the outer cylinder (radius b) that is induced by changing the current is

\begin{aligned}\mathbf{N}_b&= (b \hat{\mathbf{r}}) \times (-Q \mathbf{E}_{r = b}) \\ &= b Q \frac{2 \pi R^2 n \dot{I}(t)}{b c^2} \hat{\mathbf{r}} \times \hat{\boldsymbol{\phi}} \\ &= \frac{1}{{c^2}} 2 \pi R^2 n Q \dot{I} \hat{\mathbf{z}}.\end{aligned}

This provides the induced angular momentum on the outer cylinder

\begin{aligned}\mathbf{L}_b&= \int dt \mathbf{N}_b = \frac{ 2 \pi n R^2 Q}{c^2} \int_I^0 \frac{dI}{dt} dt \\ &= -\frac{2 \pi n R^2 Q}{c^2} I.\end{aligned}

This is the angular momentum of b induced by changing the current or changing the magnetic field.

On the inner cylinder we have

\begin{aligned}\mathbf{N}_a&= (a \hat{\mathbf{r}} ) \times (Q \mathbf{E}_{r = a}) \\ &= a Q \left(- \frac{2 \pi}{c^2} n a \dot{I} \right) \hat{\mathbf{r}} \times \hat{\boldsymbol{\phi}} \\ &= -\frac{2 \pi n a^2 Q \dot{I}}{c^2} \hat{\mathbf{z}}.\end{aligned}

So our induced angular momentum on the inner cylinder is

\begin{aligned}\mathbf{L}_a = \frac{2 \pi n a^2 Q I}{c^2} \hat{\mathbf{z}}.\end{aligned} \hspace{\stretch{1}}(1.23)

The total angular momentum in the system has to be conserved, and we must have

\begin{aligned}\mathbf{L}_a + \mathbf{L}_b = -\frac{2 n I Q}{c^2} \pi (R^2 - a^2) \hat{\mathbf{z}}.\end{aligned} \hspace{\stretch{1}}(1.24)

At the end of the tutorial, this sum was equated with the field angular momentum density \mathbf{L}_{\text{EM}}, but this has different dimensions. In fact, observe that the volume in which this angular momentum density is non-zero is the difference between the volume of the solenoid and the inner cylinder

\begin{aligned}V = \pi R^2 l - \pi a^2 l,\end{aligned} \hspace{\stretch{1}}(1.25)

so if we are to integrate the angular momentum density 1.17 over this region we have

\begin{aligned}\int \mathbf{L}_{\text{EM}} dV = -\frac{2 Q n I}{c^2} \pi (R^2 - a^2) \hat{\mathbf{z}}\end{aligned} \hspace{\stretch{1}}(1.26)

which does match the sum of the mechanical angular momenta 1.24, as expected.
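A quick numerical check of this bookkeeping, with hypothetical sample values for the charges, geometry, and current (Gaussian units):

```python
import math

Q, n, I, c = 3.0, 50.0, 2.0, 3.0e10   # hypothetical values (cgs)
R, a, l = 1.0, 0.4, 10.0              # solenoid radius, inner cylinder radius, length

L_density = -2*Q*n*I/(c**2*l)          # field angular momentum density, eq. (1.17)
V = math.pi*R**2*l - math.pi*a**2*l    # volume where it is non-zero, eq. (1.25)
L_field = L_density*V                  # integrated field angular momentum, eq. (1.26)

L_b = -2*math.pi*n*R**2*Q*I/c**2       # induced on the outer cylinder
L_a = 2*math.pi*n*a**2*Q*I/c**2        # induced on the inner cylinder

match = math.isclose(L_field, L_a + L_b, rel_tol=1e-12)
```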


[1] D. Fleisch. A Student’s Guide to Maxwell’s Equations. Cambridge University Press, 2007.

[2] Wikipedia. Faraday’s law of induction — Wikipedia, The Free Encyclopedia [online]. 2011. [Online; accessed 10-March-2011]. http://en.wikipedia.org/w/index.php?title=Faraday%27s_law_of_induction&oldid=416715237.

Posted in Math and Physics Learning. | Tagged: , , , , , , , , , , , | Leave a Comment »

PHY450H1S. Relativistic Electrodynamics Lecture 17 (Taught by Prof. Erich Poppitz). Energy and momentum density. Starting a Green’s function solution to Maxwell’s equation.

Posted by peeterjoot on March 8, 2011

[Click here for a PDF of this post with nicer formatting]


Covering chapter 6 material \S 31, and starting chapter 8 material from the text [1].

Covering lecture notes pp. 128-135: energy flux and momentum density of the EM wave (128-129); radiation pressure, its discovery and significance in physics (130-131); EM fields of moving charges: setting up the wave equation with a source (132-133); the convenience of Lorentz gauge in the study of radiation (134); reminder on Green’s functions from electrostatics (135) [Tuesday, Mar. 8]

Review. Energy density and Poynting vector.

Last time we showed that Maxwell’s equations imply

\begin{aligned}\frac{\partial }{\partial t} \frac{\mathbf{E}^2 + \mathbf{B}^2 }{8 \pi} = -\mathbf{j} \cdot \mathbf{E} - \boldsymbol{\nabla} \cdot \mathbf{S}\end{aligned} \hspace{\stretch{1}}(2.1)

In the lecture, Professor Poppitz said he was free here to use a full time derivative. When asked why, it was because he was considering \mathbf{E} and \mathbf{B} here to be functions of time only, since they were measured at a fixed point in space. This is really the same thing as using a time partial, so in these notes I’ll just be explicit and stick to using partials.

\begin{aligned}\mathbf{S} = \frac{c}{4 \pi} \mathbf{E} \times \mathbf{B}\end{aligned} \hspace{\stretch{1}}(2.2)

\begin{aligned}\frac{\partial }{\partial {t}} \int_V \frac{\mathbf{E}^2 + \mathbf{B}^2 }{8 \pi} = - \int_V \mathbf{j} \cdot \mathbf{E} - \int_{\partial_V} d^2 \boldsymbol{\sigma} \cdot \mathbf{S}\end{aligned} \hspace{\stretch{1}}(2.3)

Any change in the energy must be due either to currents, or to energy escaping through the surface.

\begin{aligned}\mathcal{E} = \frac{\mathbf{E}^2 + \mathbf{B}^2 }{8 \pi} &= \mbox{Energy density of the EM field} \\ \mathbf{S} = \frac{c}{4 \pi} \mathbf{E} \times \mathbf{B} &= \mbox{Energy flux of the EM fields}\end{aligned} \hspace{\stretch{1}}(2.4)

The energy flux of the EM field: this is the energy flowing through d^2 \mathbf{A} in unit time (\mathbf{S} \cdot d^2 \mathbf{A}).

How about electromagnetic waves?

In a plane wave moving in direction \mathbf{k}.

PICTURE: \mathbf{E} \parallel \hat{\mathbf{z}}, \mathbf{B} \parallel \hat{\mathbf{x}}, \mathbf{k} \parallel \hat{\mathbf{y}}.

So, \mathbf{S} \parallel \mathbf{k} since \mathbf{E} \times \mathbf{B} \propto \mathbf{k}.

{\left\lvert{\mathbf{S}}\right\rvert} for a plane wave is the amount of energy through unit area perpendicular to \mathbf{k} in unit time.

Recall that we calculated

\begin{aligned}\mathbf{B} &= (\mathbf{k} \times \boldsymbol{\beta}) \sin(\omega t - \mathbf{k} \cdot \mathbf{x}) \\ \mathbf{E} &= \boldsymbol{\beta} {\left\lvert{\mathbf{k}}\right\rvert} \sin(\omega t - \mathbf{k} \cdot \mathbf{x})\end{aligned} \hspace{\stretch{1}}(3.6)

Since we had \mathbf{k} \cdot \boldsymbol{\beta} = 0, we have {\left\lvert{\mathbf{E}}\right\rvert} = {\left\lvert{\mathbf{B}}\right\rvert}, and our Poynting vector follows nicely

\begin{aligned}\mathbf{S} &= \frac{\mathbf{k}}{{\left\lvert{\mathbf{k}}\right\rvert}} \frac{c}{4 \pi} \mathbf{E}^2  \\ &= \frac{\mathbf{k}}{{\left\lvert{\mathbf{k}}\right\rvert}} c \frac{\mathbf{E}^2 + \mathbf{B}^2}{8 \pi} \\ &= \frac{\mathbf{k}}{{\left\lvert{\mathbf{k}}\right\rvert}} c \mathcal{E}\end{aligned}
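These plane wave relations are easy to verify with explicit components. A throwaway sketch (the wave vector, polarization, phase, and value of c below are all arbitrary):

```python
import math

def cross(u, v):
    return tuple(u[(i+1) % 3]*v[(i+2) % 3] - u[(i+2) % 3]*v[(i+1) % 3]
                 for i in range(3))

def dot(u, v):
    return sum(x*y for x, y in zip(u, v))

c = 3.0                    # arbitrary units
kvec = (0.0, 2.0, 0.0)     # wave vector along y (arbitrary)
beta = (0.0, 0.0, 0.7)     # polarization, with k . beta = 0
s = math.sin(1.234)        # sin(wt - k.x) at an arbitrary spacetime point

kmag = math.sqrt(dot(kvec, kvec))
B = tuple(s*x for x in cross(kvec, beta))   # B = (k x beta) sin(...)
E = tuple(s*kmag*x for x in beta)           # E = beta |k| sin(...)

S = tuple(c/(4*math.pi)*x for x in cross(E, B))
energy_density = (dot(E, E) + dot(B, B))/(8*math.pi)
expected = tuple(c*energy_density*x/kmag for x in kvec)   # k-hat c times energy density
err = max(abs(a - b) for a, b in zip(S, expected))
```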

\begin{aligned}[\mathbf{S}] = \frac{\text{energy}}{\text{time} \times \text{area}} = \frac{\text{momentum} \times \text{speed}}{\text{time} \times \text{area}}\end{aligned} \hspace{\stretch{1}}(3.8)

\begin{aligned}\left[\frac{\mathbf{S}}{c^2} \right] &= \frac{\text{momentum}}{\text{time} \times \text{area} \times \text{speed}} \\ &= \frac{\text{momentum}}{\text{area} \times \text{distance}} \\ &= \frac{\text{momentum}}{\text{volume}}\end{aligned}

So we see that \mathbf{S}/c^2 is indeed rightly called “the momentum density” of the EM field.

We will later find that \mathcal{E} and \mathbf{S} are components of a rank-2 four tensor

\begin{aligned}T^{ij} = \begin{bmatrix}\mathcal{E} & \frac{S^1}{c^2} & \frac{S^2}{c^2} & \frac{S^3}{c^2} \\ \frac{S^1}{c^2} & & & \\ \frac{S^2}{c^2} & & \begin{bmatrix}\sigma^{\alpha\beta} \end{bmatrix}& \\ \frac{S^3}{c^2} & & & \end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.9)

where \sigma^{\alpha\beta} is the stress tensor. We will get to all this in more detail later.

For EM wave we have

\begin{aligned}\mathbf{S} = \hat{\mathbf{k}} c \mathcal{E}\end{aligned} \hspace{\stretch{1}}(3.10)

(this is the energy flux)

\begin{aligned}\frac{\mathbf{S}}{c^2} = \hat{\mathbf{k}} \frac{\mathcal{E}}{c}\end{aligned} \hspace{\stretch{1}}(3.11)

(the momentum density of the wave).

\begin{aligned}c {\left\lvert{\frac{\mathbf{S}}{c^2}}\right\rvert} = \mathcal{E}\end{aligned} \hspace{\stretch{1}}(3.12)

(recall \mathcal{E} = c {\left\lvert{\mathbf{p}}\right\rvert} for massless particles).

EM waves carry energy and momentum so when absorbed or reflected these are transferred to bodies.

Kepler speculated that this was the case, since he had observed that the tails of comets always face away from the sun, as if pushed by the sunlight.

Maxwell also suggested that light would exert a force (presumably he wrote down the “Maxwell stress tensor” T^{ij} that is named after him).

This was actually measured later in 1901, by Peter Lebedev (Russia).

PICTURE: pole with flags in vacuum jar. Black (absorber) on one side, and Silver (reflector) on the other. Between the two of these, momentum conservation will introduce rotation (in the direction of the silver).

This is actually a tricky experiment and requires the vacuum, since the black surface warms up, and heats up the nearby gas molecules, which causes a rotation in the opposite direction due to just these thermal effects.

Radiation pressure of light is also one of the factors that prevents a star from collapsing under gravitation.

Moving on. Solving Maxwell’s equation

Our equations are

\begin{aligned}\epsilon^{i j k l} \partial_j F_{k l} &= 0 \\ \partial_i F^{i k} &= \frac{4 \pi}{c} j^k,\end{aligned} \hspace{\stretch{1}}(4.13)

where we assume that j^k(\mathbf{x}, t) is a given. Our task is to find F^{i k}, the (\mathbf{E}, \mathbf{B}) fields.

Proceed by finding A^i. First, as usual, write F_{i j} = \partial_i A_j - \partial_j A_i. The Bianchi identity is then automatically satisfied, so we focus on the current equation.

In terms of potentials

\begin{aligned}\partial_i (\partial^i A^k - \partial^k A^i) = \frac{ 4 \pi}{c} j^k\end{aligned} \hspace{\stretch{1}}(4.15)


\begin{aligned}\partial_i \partial^i A^k - \partial^k (\partial_i A^i) = \frac{ 4 \pi}{c} j^k\end{aligned} \hspace{\stretch{1}}(4.16)

We want to work in the Lorentz gauge \partial_i A^i = 0. This is justified by the simplicity of the remaining problem

\begin{aligned}\partial_i \partial^i A^k = \frac{4 \pi}{c} j^k\end{aligned} \hspace{\stretch{1}}(4.17)


\begin{aligned}\partial_i \partial^i = \frac{1}{c^2} \frac{\partial^2 }{\partial t^2} - \Delta = \square\end{aligned} \hspace{\stretch{1}}(4.18)


\begin{aligned}\Delta = \frac{\partial^2 }{\partial x^2} + \frac{\partial^2 }{\partial y^2} + \frac{\partial^2 }{\partial z^2}\end{aligned} \hspace{\stretch{1}}(4.19)

This \square is the d’Alembert operator (“d’Alembertian”).

Our equation is

\begin{aligned}\square A^k = \frac{4 \pi}{c} j^k\end{aligned} \hspace{\stretch{1}}(4.20)

(in the Lorentz gauge)

If we learn how to solve 4.20, then we’ve learned all we need.

Method: Green’s functions

In electrostatics, where only j^0 \ne 0 and only A^0 \ne 0, we have

\begin{aligned}\Delta A^0 = 4 \pi \rho\end{aligned} \hspace{\stretch{1}}(4.21)


\begin{aligned}\Delta_{\mathbf{x}} G(\mathbf{x} - \mathbf{x}') = \delta^3( \mathbf{x} - \mathbf{x}')\end{aligned} \hspace{\stretch{1}}(4.22)


\begin{aligned}\rho(\mathbf{x}') d^3 \mathbf{x}'\end{aligned} \hspace{\stretch{1}}(4.23)

(a small box)

acting through distance {\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}, acting at point \mathbf{x}. With G(\mathbf{x}, \mathbf{x}') = \frac{1}{4 \pi {\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}}, we have

\begin{aligned}\int d^3 \mathbf{x}' \Delta_{\mathbf{x}} G(\mathbf{x} - \mathbf{x}') \rho(\mathbf{x}')&= \int d^3 \mathbf{x}' \delta^3( \mathbf{x} - \mathbf{x}') \rho(\mathbf{x}') \\ &= \rho(\mathbf{x})\end{aligned}

Also, since \Delta_{\mathbf{x}} acts only on the unprimed coordinates, it can be brought inside the integral over \mathbf{x}'. So, writing \phi(\mathbf{x}) = \int d^3 \mathbf{x}' G(\mathbf{x} - \mathbf{x}') 4 \pi \rho(\mathbf{x}'), we find

\begin{aligned}\Delta_{\mathbf{x}} \phi(\mathbf{x})&=\int d^3 \mathbf{x}' \Delta_{\mathbf{x}} G(\mathbf{x} - \mathbf{x}') 4 \pi \rho(\mathbf{x}') \\ &= 4 \pi \rho(\mathbf{x}).\end{aligned}

We end up finding that

\begin{aligned}\phi(\mathbf{x}) = \int \frac{\rho(\mathbf{x}')}{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}} d^3 \mathbf{x}',\end{aligned} \hspace{\stretch{1}}(4.24)

thus solving the problem. We wish next to do this for the Maxwell equation 4.20.

The Green’s function method is effective, but I can’t help but consider it somewhat of a cheat, since one has to somehow know the Green’s function in advance. In the electrostatics case, at least we can work from the potential function and take its Laplacian to find that this is equivalent (thus implicitly solving for the Green’s function at the same time). It will be interesting to see how we do this for the forced d’Alembertian equation.
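As a quick sanity check (not part of the original notes), it is easy to verify symbolically that 1/{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert} is harmonic away from the source point, which is exactly where the delta function behavior is concentrated. A sympy sketch, placing the source at the origin:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True, positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)

# Coulomb potential of a unit point charge at the origin
phi = 1 / r

# Cartesian Laplacian of the potential
laplacian = sum(sp.diff(phi, v, 2) for v in (x, y, z))
print(sp.simplify(laplacian))  # 0: Delta(1/r) vanishes for r != 0
```

The delta function at \mathbf{x} = \mathbf{x}' of course cannot be seen this way; it shows up only when the singular point is included in an integration volume.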


[1] L.D. Landau and E.M. Lifshitz. The classical theory of fields. Butterworth-Heinemann, 1980.

Posted in Math and Physics Learning. | Tagged: , , , , , , , , , , | Leave a Comment »

PHY450H1S. Relativistic Electrodynamics Lecture 16 (Taught by Prof. Erich Poppitz). Monochromatic EM fields. Poynting vector and energy density conservation

Posted by peeterjoot on March 3, 2011

[Click here for a PDF of this post with nicer formatting]


Covering chapter 6 material from the text [1].

Covering lecture notes pp. 115-127: properties of monochromatic plane EM waves (122-124); energy and energy flux of the EM field and energy conservation from the equations of motion (125-127) [Wednesday, Mar. 2]

Review. Solution to the wave equation.

Recall that in the Coulomb gauge

\begin{aligned}A^0 &= 0 \\ \boldsymbol{\nabla} \cdot \mathbf{A} &= 0\end{aligned} \hspace{\stretch{1}}(2.1)

our equation to solve is

\begin{aligned}\left( \frac{1}{{c^2}} \frac{\partial^2 {{}}}{\partial {{t}}^2} - \Delta \right) \mathbf{A} = 0.\end{aligned} \hspace{\stretch{1}}(2.3)

We found that the general solution was

\begin{aligned}\mathbf{A}(\mathbf{x}, t) = \int \frac{d^3\mathbf{k}}{(2 \pi)^3} \left(e^{i (\mathbf{k} \cdot \mathbf{x} + \omega_k t)} \boldsymbol{\beta}^{*}(-\mathbf{k})+e^{i (\mathbf{k} \cdot \mathbf{x} - \omega_k t)} \boldsymbol{\beta}(\mathbf{k})\right)\end{aligned} \hspace{\stretch{1}}(2.4)


subject to the transversality condition

\begin{aligned}\mathbf{k} \cdot \boldsymbol{\beta}(\mathbf{k}) = 0\end{aligned} \hspace{\stretch{1}}(2.5)

It is clear that this is a solution since

\begin{aligned}\left( \frac{1}{{c^2}} \frac{\partial^2 {{}}}{\partial {{t}}^2} - \Delta \right) e^{i (\mathbf{k} \cdot \mathbf{x} \pm \omega_k t)} = 0\end{aligned} \hspace{\stretch{1}}(2.6)
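That each Fourier component is annihilated by the d’Alembertian can also be confirmed symbolically. A one dimensional sympy sketch, with the dispersion relation \omega_k = c k put in by hand:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
c, k = sp.symbols('c k', positive=True)
omega = c * k  # dispersion relation omega_k = c |k|

# a single Fourier component e^{i(k x - omega t)}
A = sp.exp(sp.I * (k * x - omega * t))

# d'Alembertian (1/c^2) d^2/dt^2 - d^2/dx^2 in one spatial dimension
box_A = sp.diff(A, t, 2) / c**2 - sp.diff(A, x, 2)
print(sp.simplify(box_A))  # 0
```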

Moving to physically relevant results.

Since the most general solution is a sum over \mathbf{k}, it is enough to consider only a single \mathbf{k}, or equivalently, take

\begin{aligned}\boldsymbol{\beta}(\mathbf{k}) &= \boldsymbol{\beta} ( 2\pi)^3 \delta^3(\mathbf{k} - \mathbf{p}) \\ \boldsymbol{\beta}^{*}(-\mathbf{k}) &= \boldsymbol{\beta}^{*} ( 2\pi)^3 \delta^3(-\mathbf{k} - \mathbf{p})\end{aligned} \hspace{\stretch{1}}(3.7)

but we have the freedom to pick a real and constant \boldsymbol{\beta}. Now our solution is

\begin{aligned}\mathbf{A}(\mathbf{x}, t) = \boldsymbol{\beta} \left(e^{-i (\mathbf{p} \cdot \mathbf{x} - \omega_p t)}+e^{i (\mathbf{p} \cdot \mathbf{x} - \omega_p t)}\right)= 2 \boldsymbol{\beta} \cos( \omega_p t - \mathbf{p} \cdot \mathbf{x}),\end{aligned} \hspace{\stretch{1}}(3.9)

where the factor of two can be absorbed into \boldsymbol{\beta}, leaving \mathbf{A} = \boldsymbol{\beta} \cos( \omega t - \mathbf{p} \cdot \mathbf{x}), subject to


\begin{aligned}\boldsymbol{\beta} \cdot \mathbf{p} = 0\end{aligned} \hspace{\stretch{1}}(3.10)

FIXME:DIY: show that this works for complex \boldsymbol{\beta} as well.

Let’s choose

\begin{aligned}\mathbf{p} = (p, 0, 0)\end{aligned} \hspace{\stretch{1}}(3.11)


\begin{aligned}\mathbf{p} \cdot \boldsymbol{\beta} = p_x \beta_x\end{aligned} \hspace{\stretch{1}}(3.12)

we must have

\begin{aligned}\beta_x = 0\end{aligned} \hspace{\stretch{1}}(3.13)


\begin{aligned}\boldsymbol{\beta} = (0, \beta_y, \beta_z)\end{aligned} \hspace{\stretch{1}}(3.14)

\paragraph{Claim:} The Coulomb gauge 0 = \boldsymbol{\nabla} \cdot \mathbf{A} = (\boldsymbol{\beta} \cdot \mathbf{p})\sin(\omega t - \mathbf{p} \cdot \mathbf{x}) implies that for each \mathbf{p} there are two linearly independent choices of \boldsymbol{\beta}.

This follows since \boldsymbol{\beta} \cdot \mathbf{p} = 0 restricts \boldsymbol{\beta} to the plane perpendicular to \mathbf{p}, a two dimensional space.


\boldsymbol{\beta}_1, \boldsymbol{\beta}_2, \mathbf{p} all mutually perpendicular.

\begin{aligned}\mathbf{E}&= -\frac{1}{{c}} \frac{\partial {\mathbf{A}}}{\partial {t}}  \\ &= -\frac{\boldsymbol{\beta}}{c} \frac{\partial {}}{\partial {t}} \cos(\omega t - \mathbf{p} \cdot \mathbf{x}) \\ &= \frac{1}{{c}} \boldsymbol{\beta} \omega_p\sin(\omega t - \mathbf{p} \cdot \mathbf{x})\end{aligned}

(recall: \omega_p = c{\left\lvert{\mathbf{p}}\right\rvert})

\begin{aligned}\boxed{\mathbf{E} = \boldsymbol{\beta} {\left\lvert{\mathbf{p}}\right\rvert} \sin(\omega t - \mathbf{p} \cdot \mathbf{x})}\end{aligned} \hspace{\stretch{1}}(3.15)

\begin{aligned}\mathbf{B}&= \boldsymbol{\nabla} \times \mathbf{A} \\ &= \boldsymbol{\nabla} \times ( \boldsymbol{\beta} \cos(\omega t - \mathbf{p} \cdot \mathbf{x}) ) \\ &= (\boldsymbol{\nabla} \cos(\omega t - \mathbf{p} \cdot \mathbf{x})) \times \boldsymbol{\beta} \\ &= \sin(\omega t - \mathbf{p} \cdot \mathbf{x}) \mathbf{p} \times \boldsymbol{\beta}\end{aligned}

\begin{aligned}\boxed{\mathbf{B} = (\mathbf{p} \times \boldsymbol{\beta}) \sin(\omega t - \mathbf{p} \cdot \mathbf{x})}\end{aligned} \hspace{\stretch{1}}(3.16)
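A small numeric spot check of the boxed results (the specific vectors below are arbitrary test values): with \boldsymbol{\beta} \cdot \mathbf{p} = 0, the fields \mathbf{E} = \boldsymbol{\beta} {\left\lvert{\mathbf{p}}\right\rvert} \sin\theta and \mathbf{B} = (\mathbf{p} \times \boldsymbol{\beta}) \sin\theta come out mutually perpendicular, perpendicular to \mathbf{p}, and of equal magnitude.

```python
import numpy as np

p = np.array([2.0, 0.0, 0.0])      # propagation vector (arbitrary test value)
beta = np.array([0.0, 0.7, -0.3])  # transverse amplitude, beta . p = 0
theta = 1.234                      # phase omega t - p . x at some event

E = beta * np.linalg.norm(p) * np.sin(theta)
B = np.cross(p, beta) * np.sin(theta)

print(np.dot(E, p), np.dot(B, p), np.dot(E, B))  # all zero
print(np.linalg.norm(E) - np.linalg.norm(B))     # zero: |E| = |B|
```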

\paragraph{Example:} \mathbf{p} \parallel \mathbf{e}_x, \mathbf{B} \parallel \mathbf{e}_y or \mathbf{e}_z

(since we have two linearly independent choices)

\paragraph{Example:} take \boldsymbol{\beta} \parallel \mathbf{e}_y

\begin{aligned}\mathbf{E} &= \boldsymbol{\beta} p \sin(c p t - p x)  \\ \mathbf{B} &= (\mathbf{p} \times \boldsymbol{\beta}) \sin(c p t - p x)\end{aligned} \hspace{\stretch{1}}(3.17)

At t = 0

\begin{aligned}\mathbf{E} &= -\boldsymbol{\beta} p \sin( p x)  \\ \mathbf{B} &= -(\mathbf{p} \times \boldsymbol{\beta}) \sin( p x ) = - {\left\lvert{\boldsymbol{\beta}}\right\rvert} p \mathbf{e}_z \sin(p x)\end{aligned} \hspace{\stretch{1}}(3.19)

PICTURE: two oscillating mutually perpendicular sinusoids.

So physically, we see that \mathbf{p} is the direction of propagation. We have always

\begin{aligned}\mathbf{p} \perp \mathbf{E}\end{aligned} \hspace{\stretch{1}}(3.21)

and we have two possible polarizations.

Convention is usually to take the direction of oscillation of \mathbf{E} as the polarization direction of the wave.

This is the starting point for the field of optics, because the polarization of the incident wave is strongly tied to how much of the wave will reflect off of a surface with a given index of refraction n.

EM waves carrying energy and momentum

Maxwell field in vacuum is the sum of plane monochromatic waves, two per wave vector.


\begin{aligned}\mathbf{E} &\parallel \mathbf{e}_3 \\ \mathbf{B} &\parallel \mathbf{e}_1 \\ \mathbf{k} &\parallel \mathbf{e}_2\end{aligned}


\begin{aligned}\mathbf{B} &\parallel -\mathbf{e}_3 \\ \mathbf{E} &\parallel \mathbf{e}_1 \\ \mathbf{k} &\parallel \mathbf{e}_2\end{aligned}

(two linearly independent polarizations)

Our wave frequency is

\begin{aligned}\omega_{\mathbf{k}} = c {\left\lvert{\mathbf{k}}\right\rvert}\end{aligned} \hspace{\stretch{1}}(4.22)

The wavelength is the spatial period, the distance such that the substitution x \rightarrow x + \frac{2 \pi}{k} leaves the waveform unchanged. For


\begin{aligned}\sin(k c t - k x)\end{aligned} \hspace{\stretch{1}}(4.23)

\begin{aligned}\lambda_{\mathbf{k}} = \frac{2 \pi}{k}\end{aligned} \hspace{\stretch{1}}(4.24)


and the period of oscillation is

\begin{aligned}T = \frac{ 2 \pi} {k c} = \frac{\lambda_\mathbf{k}}{c}\end{aligned} \hspace{\stretch{1}}(4.25)
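As a concrete numeric example of these relations (the 500 nm wavelength is just an arbitrary choice, roughly green light):

```python
import numpy as np

c = 2.998e8          # speed of light, m/s
lam = 500e-9         # wavelength: 500 nm, an arbitrary example
k = 2 * np.pi / lam  # wave number
omega = c * k        # dispersion relation omega_k = c |k|
T = 2 * np.pi / (k * c)

print(T)                       # period, about 1.7e-15 s
print(np.isclose(T, lam / c))  # True: T = lambda_k / c
```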

Energy and momentum of EM waves.

Classical mechanics motivation.

To motivate our approach, let’s recall one route from our equations of motion in classical mechanics, to the energy conservation relation. Our EOM in one dimension is

\begin{aligned}m \frac{d}{dt} \dot{x} = - \mathcal{U}'(x).\end{aligned} \hspace{\stretch{1}}(5.26)

We can multiply both sides by what we take the time derivative of

\begin{aligned}m \dot{x} \frac{d{{\dot{x}}}}{dt} = - \dot{x} \mathcal{U}'(x),\end{aligned} \hspace{\stretch{1}}(5.27)

and then manipulate it a bit so that we have time derivatives on both sides

\begin{aligned}\frac{d{{}}}{dt} \frac{m \dot{x}^2}{2} = - \frac{d{{ \mathcal{U}(x) }}}{dt}.\end{aligned} \hspace{\stretch{1}}(5.28)

Taking differences, we have

\begin{aligned}\frac{d{{}}}{dt} \left( \frac{m \dot{x}^2}{2} + \mathcal{U}(x) \right) = 0,\end{aligned} \hspace{\stretch{1}}(5.29)

which allows us to find a conservation relationship that we label energy conservation (\mathcal{E} = K + \mathcal{U}).
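This conservation law can be watched numerically. Here is a sketch integrating m \ddot{x} = -\mathcal{U}'(x) for a harmonic potential \mathcal{U}(x) = k x^2/2 with a leapfrog step (the mass, spring constant, and initial conditions are arbitrary), confirming that K + \mathcal{U} stays constant:

```python
m, kspring, dt = 1.0, 4.0, 1e-4   # arbitrary mass, spring constant, time step
x, v = 1.0, 0.0                   # initial stretch, starting at rest

def energy(x, v):
    # kinetic + potential, with U(x) = kspring x^2 / 2
    return 0.5 * m * v**2 + 0.5 * kspring * x**2

E0 = energy(x, v)
for _ in range(100_000):          # leapfrog steps of m dv/dt = -U'(x) = -k x
    v += 0.5 * dt * (-kspring * x) / m
    x += dt * v
    v += 0.5 * dt * (-kspring * x) / m

drift = abs(energy(x, v) - E0)
print(drift)                      # tiny (well below 1e-6): E = K + U is conserved
```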

Doing the same thing for Maxwell’s equations.

Poppitz claims we have very few tricks in physics, and we really just do the same thing for our EM case. Our equations are a bit messier to start with, and for the vacuum, our non-divergence equations are

\begin{aligned}\boldsymbol{\nabla} \times \mathbf{B} -\frac{1}{{c}} \frac{\partial {\mathbf{E}}}{\partial {t}} &= \frac{4 \pi}{c} \mathbf{j} \\ \boldsymbol{\nabla} \times \mathbf{E} +\frac{1}{{c}} \frac{\partial {\mathbf{B}}}{\partial {t}} &= 0\end{aligned} \hspace{\stretch{1}}(5.30)

We can dot these with \mathbf{E} and \mathbf{B} respectively, repeating the trick of “multiplying” by what we take the time derivative of

\begin{aligned}\mathbf{E} \cdot (\boldsymbol{\nabla} \times \mathbf{B}) -\frac{1}{{c}} \mathbf{E} \cdot \frac{\partial {\mathbf{E}}}{\partial {t}} &= \frac{4 \pi}{c} \mathbf{E} \cdot \mathbf{j} \\ \mathbf{B} \cdot (\boldsymbol{\nabla} \times \mathbf{E}) +\frac{1}{{c}} \mathbf{B} \cdot \frac{\partial {\mathbf{B}}}{\partial {t}} &= 0,\end{aligned} \hspace{\stretch{1}}(5.32)

and then take differences

\begin{aligned}\frac{1}{{c}} \left(\mathbf{B} \cdot \frac{\partial {\mathbf{B}}}{\partial {t}}+ \mathbf{E} \cdot \frac{\partial {\mathbf{E}}}{\partial {t}} \right) + \mathbf{B} \cdot (\boldsymbol{\nabla} \times \mathbf{E}) -\mathbf{E} \cdot (\boldsymbol{\nabla} \times \mathbf{B}) =-\frac{4 \pi}{c} \mathbf{E} \cdot \mathbf{j}.\end{aligned} \hspace{\stretch{1}}(5.34)


\begin{aligned}-\mathbf{B} \cdot (\boldsymbol{\nabla} \times \mathbf{E}) +\mathbf{E} \cdot (\boldsymbol{\nabla} \times \mathbf{B}) = \boldsymbol{\nabla} \cdot ( \mathbf{B} \times \mathbf{E} ).\end{aligned} \hspace{\stretch{1}}(5.35)

This is almost trivial with an expansion of the RHS in tensor notation

\begin{aligned}\boldsymbol{\nabla} \cdot ( \mathbf{B} \times \mathbf{E} )&=\partial_\alpha e^{\alpha \beta \sigma} B^\beta E^\sigma \\ &=e^{\alpha \beta \sigma} (\partial_\alpha B^\beta) E^\sigma+e^{\alpha \beta \sigma} B^\beta (\partial_\alpha E^\sigma) \\ &=\mathbf{E} \cdot (\boldsymbol{\nabla} \times \mathbf{B})-\mathbf{B} \cdot (\boldsymbol{\nabla} \times \mathbf{E})\qquad \square\end{aligned}
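The same identity can be confirmed with sympy for arbitrary test fields (the polynomial fields below are arbitrary choices, picked only to exercise every term):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# arbitrary polynomial test fields
B = sp.Matrix([x * y, y * z**2, z * x])
E = sp.Matrix([y**2, z * x, x * y * z])

def curl(F):
    return sp.Matrix([sp.diff(F[2], y) - sp.diff(F[1], z),
                      sp.diff(F[0], z) - sp.diff(F[2], x),
                      sp.diff(F[1], x) - sp.diff(F[0], y)])

def div(F):
    return sp.diff(F[0], x) + sp.diff(F[1], y) + sp.diff(F[2], z)

# div(B x E) = E . curl(B) - B . curl(E)
lhs = div(B.cross(E))
rhs = E.dot(curl(B)) - B.dot(curl(E))
print(sp.simplify(lhs - rhs))  # 0
```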

Regrouping we have

\begin{aligned}\frac{1}{{2 c}} \frac{\partial {}}{\partial {t}} \left(\mathbf{B}^2 + \mathbf{E}^2 \right) - \boldsymbol{\nabla} \cdot ( \mathbf{B} \times \mathbf{E} )=-\frac{4 \pi}{c} \mathbf{E} \cdot \mathbf{j}.\end{aligned} \hspace{\stretch{1}}(5.36)

A final rescaling makes the units natural

\begin{aligned}\frac{\partial {}}{\partial {t}} \frac{ \mathbf{E}^2 + \mathbf{B}^2 }{8 \pi} - \boldsymbol{\nabla} \cdot \left( \frac{c}{4 \pi} \mathbf{B} \times \mathbf{E} \right) = - \mathbf{E} \cdot \mathbf{j}.\end{aligned} \hspace{\stretch{1}}(5.37)

We define the cross product term as the Poynting vector

\begin{aligned}\mathbf{S} &= \frac{c}{4 \pi} \mathbf{B} \times \mathbf{E}.\end{aligned} \hspace{\stretch{1}}(5.38)

Suppose we integrate over a spatial volume. This gives us

\begin{aligned}\frac{\partial {}}{\partial {t}}\int_V d^3 \mathbf{x} \frac{ \mathbf{E}^2 + \mathbf{B}^2 }{8 \pi} - \int_V d^3 \mathbf{x} \boldsymbol{\nabla} \cdot \mathbf{S} = - \int_V d^3 \mathbf{x} \mathbf{E} \cdot \mathbf{j}.\end{aligned} \hspace{\stretch{1}}(5.39)

Our Poynting integral can be converted to a surface integral utilizing the divergence theorem

\begin{aligned}\int_V d^3 \mathbf{x} \boldsymbol{\nabla} \cdot \mathbf{S} = \int_{\partial V} d^2 \sigma \mathbf{n} \cdot \mathbf{S} =\int_{\partial V} d^2 \boldsymbol{\sigma} \cdot \mathbf{S}\end{aligned} \hspace{\stretch{1}}(5.40)

We make the interpretations

\begin{aligned}\int_V d^3 \mathbf{x} \frac{ \mathbf{E}^2 + \mathbf{B}^2 }{8 \pi} &= \mbox{energy in the volume} \\ \int_V d^3 \mathbf{x} \boldsymbol{\nabla} \cdot \mathbf{S} &= \mbox{energy flux through the surface per unit time} \\ - \int_V d^3 \mathbf{x} \mathbf{E} \cdot \mathbf{j} &= \mbox{work done on the field per unit time}\end{aligned}

\paragraph{Justifying the sign, and clarifying work done by what, above.}

Recall that the energy term of the Lorentz force equation was

\begin{aligned}\frac{d{{\mathcal{E}_{\text{kinetic}}}}}{dt} = e \mathbf{E} \cdot \mathbf{v}\end{aligned} \hspace{\stretch{1}}(5.41)


\begin{aligned}\mathbf{j} = e \rho \mathbf{v}\end{aligned} \hspace{\stretch{1}}(5.42)


\begin{aligned}\int_V d^3 \mathbf{x} \mathbf{E} \cdot \mathbf{j}\end{aligned} \hspace{\stretch{1}}(5.43)

represents the rate of change of kinetic energy of the charged particles as they move through a field. If this is positive, then the charge distribution has gained energy. The negation of this quantity would represent energy transfer to the field from the charge distribution, the work done \underline{on the field} by the charge distribution.

Aside: As a four vector relationship.

In tutorial today (after this lecture, but before typing up these lecture notes in full), we used \mathcal{U} for the energy density term above

\begin{aligned}\mathcal{U} = \frac{ \mathbf{E}^2 + \mathbf{B}^2 }{8 \pi} .\end{aligned} \hspace{\stretch{1}}(5.44)

This allows us to group the quantities in our conservation relationship above nicely

\begin{aligned}\frac{\partial {\mathcal{U}}}{\partial {t}} - \boldsymbol{\nabla} \cdot \mathbf{S} = - \mathbf{E} \cdot \mathbf{j}.\end{aligned} \hspace{\stretch{1}}(5.45)

It appears natural to write 5.45 in the form of a four divergence. Suppose we define

\begin{aligned}P^i = (\mathcal{U}, -\mathbf{S}/c)\end{aligned} \hspace{\stretch{1}}(5.46)

then we have

\begin{aligned}\partial_i P^i = -\frac{1}{{c}} \mathbf{E} \cdot \mathbf{j}.\end{aligned} \hspace{\stretch{1}}(5.47)

Since the LHS has the appearance of a four scalar, this seems to imply that \mathbf{E} \cdot \mathbf{j} is a Lorentz invariant. It is curious that we have only the four scalar that comes from the energy term of the Lorentz force on the RHS of the conservation relationship. Peeking ahead at the text, this appears to be why a rank two energy tensor T^{ij} is introduced. For a relativistically natural quantity, we ought to have a conservation relationship associated with each of the momentum change components of the four vector Lorentz force equation as well.


[1] L.D. Landau and E.M. Lifshitz. The classical theory of fields. Butterworth-Heinemann, 1980.

Posted in Math and Physics Learning. | Tagged: , , , , , , , , , | Leave a Comment »

Energy and momentum for Complex electric and magnetic field phasors.

Posted by peeterjoot on December 15, 2009

[Click here for a PDF of this post with nicer formatting]


In [1] a complex phasor representations of the electric and magnetic fields is used

\begin{aligned}\mathbf{E} &= \boldsymbol{\mathcal{E}} e^{-i\omega t} \\ \mathbf{B} &= \boldsymbol{\mathcal{B}} e^{-i\omega t}.\end{aligned} \quad\quad\quad(1)

Here the vectors \boldsymbol{\mathcal{E}} and \boldsymbol{\mathcal{B}} are allowed to take on complex values. Jackson uses the real part of these complex vectors as the true fields, so one is really interested in just these quantities

\begin{aligned}\text{Real} \mathbf{E} &= \boldsymbol{\mathcal{E}}_r \cos(\omega t) + \boldsymbol{\mathcal{E}}_i \sin(\omega t) \\ \text{Real} \mathbf{B} &= \boldsymbol{\mathcal{B}}_r \cos(\omega t) + \boldsymbol{\mathcal{B}}_i \sin(\omega t),\end{aligned} \quad\quad\quad(3)

but one carries the whole complex quantity through the manipulations for simplicity. It is stated that the energy for such complex vector fields takes the form (ignoring constant scaling factors and units)

\begin{aligned}\text{Energy} \propto \mathbf{E} \cdot {\mathbf{E}}^{*} + \mathbf{B} \cdot {\mathbf{B}}^{*}.\end{aligned} \quad\quad\quad(5)

In some ways this is an obvious generalization. Less obvious is how this and the Poynting vector are related in their corresponding conservation relationships.
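Before going further, the real part decomposition 3 is easy to check numerically for arbitrary complex vectors (the random values below are just test data):

```python
import numpy as np

rng = np.random.default_rng(0)
Er, Ei = rng.normal(size=3), rng.normal(size=3)  # real and imaginary parts
E_phasor = Er + 1j * Ei
omega, t = 2.0, 0.37                             # arbitrary frequency and time

field = (E_phasor * np.exp(-1j * omega * t)).real
expected = Er * np.cos(omega * t) + Ei * np.sin(omega * t)
print(np.allclose(field, expected))  # True
```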

Here I explore this, employing a Geometric Algebra representation of the energy momentum tensor based on the real field representation found in [2]. Given the complex valued fields and a requirement that both the real and imaginary parts of the field satisfy Maxwell’s equation, it should be possible to derive the conservation relationship between the energy density and Poynting vector from first principles.

Review of GA formalism for real fields.

In SI units the Geometric algebra form of Maxwell’s equation is

\begin{aligned}\nabla F &= J/\epsilon_0 c,\end{aligned} \quad\quad\quad(6)

where one has for the symbols

\begin{aligned}F &= \mathbf{E} + c I \mathbf{B} \\ I &= \gamma_0 \gamma_1 \gamma_2 \gamma_3 \\ \mathbf{E} &= E^k \gamma_k \gamma_0  \\ \mathbf{B} &= B^k \gamma_k \gamma_0  \\ (\gamma^0)^2 &= -(\gamma^k)^2 = 1 \\ \gamma^\mu \cdot \gamma_\nu &= {\delta^\mu}_\nu \\ J &= c \rho \gamma_0 + J^k \gamma_k \\ \nabla &= \gamma^\mu \partial_\mu = \gamma^\mu {\partial {}}/{\partial {x^\mu}}.\end{aligned} \quad\quad\quad(7)

The symmetric electrodynamic energy momentum tensor for real fields \mathbf{E} and \mathbf{B} is

\begin{aligned}T(a) &= \frac{-\epsilon_0}{2} F a F = \frac{\epsilon_0}{2} F a \tilde{F}.\end{aligned} \quad\quad\quad(15)

It may not be obvious that this is in fact a four vector, but this can be seen since it can only have grade one and three components, and also equals its reverse implying that the grade three terms are all zero. To illustrate this explicitly consider the components of T^{\mu 0}

\begin{aligned}\frac{2}{\epsilon_0} T(\gamma^0) &= -(\mathbf{E} + c I \mathbf{B}) \gamma^0 (\mathbf{E} + c I \mathbf{B}) \\ &= (\mathbf{E} + c I \mathbf{B}) (\mathbf{E} - c I \mathbf{B}) \gamma^0 \\ &= (\mathbf{E}^2 + c^2 \mathbf{B}^2 + c I (\mathbf{B} \mathbf{E} - \mathbf{E} \mathbf{B})) \gamma^0 \\ &= (\mathbf{E}^2 + c^2 \mathbf{B}^2) \gamma^0 + 2 c I ( \mathbf{B} \wedge \mathbf{E} ) \gamma^0 \\ &= (\mathbf{E}^2 + c^2 \mathbf{B}^2) \gamma^0 + 2 c ( \mathbf{E} \times \mathbf{B} ) \gamma^0 \\ \end{aligned}

Our result is a four vector in the Dirac basis as expected

\begin{aligned}T(\gamma^0) &= T^{\mu 0} \gamma_\mu \\ T^{0 0} &= \frac{\epsilon_0}{2} (\mathbf{E}^2 + c^2 \mathbf{B}^2) \\ T^{k 0} &= c \epsilon_0 (\mathbf{E} \times \mathbf{B})_k \end{aligned} \quad\quad\quad(16)

Similar expansions are possible for the general tensor components T^{\mu\nu} but let’s defer this more general expansion until considering complex valued fields. The main point here is to remind oneself how to express the energy momentum tensor in a fashion that is natural in a GA context. We also know that one has a conservation relationship associated with the divergence of this tensor \nabla \cdot T(a) (i.e. \partial_\mu T^{\mu\nu}), and want to rederive this relationship after guessing what form the GA expression for the energy momentum tensor takes when one allows the field vectors to take complex values.

Computing the conservation relationship for complex field vectors.

As in 5, if one wants

\begin{aligned}T^{0 0} \propto \mathbf{E} \cdot {\mathbf{E}}^{*} + c^2 \mathbf{B} \cdot {\mathbf{B}}^{*},\end{aligned} \quad\quad\quad(19)

it is reasonable to assume that our energy momentum tensor will take the form

\begin{aligned}T(a) &= \frac{\epsilon_0}{4} \left( {{F}}^{*} a \tilde{F} + \tilde{F} a {{F}}^{*} \right)= \frac{\epsilon_0}{2} \text{Real} \left( {{F}}^{*} a \tilde{F} \right)\end{aligned} \quad\quad\quad(20)

For real vector fields this reduces to the previous results and should produce the desired mix of real and imaginary dot products for the energy density term of the tensor. This is also a real four vector even when the field is complex, so the energy density and power density terms will all be real valued, which seems desirable.

Expanding the tensor. Easy parts.

As with real fields expansion of T(a) in terms of \mathbf{E} and \mathbf{B} is simplest for a = \gamma^0. Let’s start with that.

\begin{aligned}\frac{4}{\epsilon_0} T(\gamma^0) \gamma_0&=-({\mathbf{E}}^{*} + c I {\mathbf{B}}^{*} )\gamma^0 (\mathbf{E} + c I \mathbf{B}) \gamma_0-(\mathbf{E} + c I \mathbf{B} )\gamma^0 ({\mathbf{E}}^{*} + c I {\mathbf{B}}^{*} ) \gamma_0 \\ &=({\mathbf{E}}^{*} + c I {\mathbf{B}}^{*} ) (\mathbf{E} - c I \mathbf{B}) +(\mathbf{E} + c I \mathbf{B} ) ({\mathbf{E}}^{*} - c I {\mathbf{B}}^{*} ) \\ &={\mathbf{E}}^{*} \mathbf{E} + \mathbf{E} {\mathbf{E}}^{*} + c^2 ({\mathbf{B}}^{*} \mathbf{B} + \mathbf{B} {\mathbf{B}}^{*} ) + c I ( {\mathbf{B}}^{*} \mathbf{E} - {\mathbf{E}}^{*} \mathbf{B} + \mathbf{B} {\mathbf{E}}^{*} - \mathbf{E} {\mathbf{B}}^{*} ) \\ &=2 \mathbf{E} \cdot {\mathbf{E}}^{*} + 2 c^2 \mathbf{B} \cdot {\mathbf{B}}^{*}+ 2 c ( \mathbf{E} \times {\mathbf{B}}^{*} + {\mathbf{E}}^{*} \times \mathbf{B} ).\end{aligned}

This gives

\begin{aligned}T(\gamma^0) &=\frac{\epsilon_0}{2} \left( \mathbf{E} \cdot {\mathbf{E}}^{*} + c^2 \mathbf{B} \cdot {\mathbf{B}}^{*} \right) \gamma^0+ \frac{\epsilon_0 c}{2} ( \mathbf{E} \times {\mathbf{B}}^{*} + {\mathbf{E}}^{*} \times \mathbf{B} ) \gamma^0\end{aligned} \quad\quad\quad(21)

The sum of {{F}}^{*} a \tilde{F} and its conjugate has produced the desired energy density expression. An implication of this is that one can form and take real parts of a complex Poynting vector \mathbf{S} \propto \mathbf{E} \times {\mathbf{B}}^{*} to calculate the momentum density. This is stated but not demonstrated in Jackson, perhaps considered too obvious or messy to derive.

Observe that the choice to work with complex valued vector fields gives a nice consistency: one has the same factor of 1/2 in both the energy and momentum terms. While the energy term is obviously real, the momentum terms can be written in an explicitly real notation as well, since one has a quantity plus its conjugate. Using a more conventional four vector notation (omitting the explicit Dirac basis vectors), one can write this out as a strictly real quantity.

\begin{aligned}T(\gamma^0) &=\epsilon_0 \Bigl( \frac{1}{{2}}(\mathbf{E} \cdot {\mathbf{E}}^{*} + c^2 \mathbf{B} \cdot {\mathbf{B}}^{*}),c \text{Real}( \mathbf{E} \times {\mathbf{B}}^{*} ) \Bigr)\end{aligned} \quad\quad\quad(22)

Observe that when the vector fields are restricted to real quantities, the conjugate and real part operators can be dropped and the real vector field result 16 is recovered.
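A payoff of the complex Poynting form is the standard time averaging result: averaged over a period, the cross product of the real fields equals \frac{1}{{2}} \text{Real}( \mathbf{E} \times {\mathbf{B}}^{*} ). A numeric spot check with arbitrary phasors (a sketch, not from the derivation above):

```python
import numpy as np

rng = np.random.default_rng(1)
E = rng.normal(size=3) + 1j * rng.normal(size=3)  # arbitrary complex phasor
B = rng.normal(size=3) + 1j * rng.normal(size=3)
omega = 3.0
T = 2 * np.pi / omega

# sample the real (physical) fields uniformly over exactly one period
t = np.linspace(0.0, T, 4096, endpoint=False)
Er = (np.exp(-1j * omega * t)[:, None] * E).real
Br = (np.exp(-1j * omega * t)[:, None] * B).real

avg = np.cross(Er, Br).mean(axis=0)  # time averaged (Real E) x (Real B)
print(np.allclose(avg, 0.5 * np.real(np.cross(E, np.conj(B)))))  # True
```

The oscillating e^{\pm 2 i \omega t} cross terms average away over the period, leaving only the conjugate-paired piece, which is why the factor of 1/2 and the conjugate appear.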

Expanding the tensor. Messier parts.

I intended here to compute T(\gamma^k), and my starting point was a decomposition of the field vectors into components that anticommute or commute with \gamma^k

\begin{aligned}\mathbf{E} &= \mathbf{E}_\parallel + \mathbf{E}_\perp \\ \mathbf{B} &= \mathbf{B}_\parallel + \mathbf{B}_\perp.\end{aligned} \quad\quad\quad(23)

The components parallel to the spatial vector \sigma_k = \gamma_k \gamma_0 are anticommuting \gamma^k \mathbf{E}_\parallel = -\mathbf{E}_\parallel \gamma^k, whereas the perpendicular components commute \gamma^k \mathbf{E}_\perp = \mathbf{E}_\perp \gamma^k. The expansion of the tensor products is then

\begin{aligned}({{F}}^{*} \gamma^k \tilde{F} + \tilde{F} \gamma^k {{F}}^{*}) \gamma_k&= - ({\mathbf{E}}^{*} + I c {\mathbf{B}}^{*}) \gamma^k ( \mathbf{E}_\parallel + \mathbf{E}_\perp + c I ( \mathbf{B}_\parallel + \mathbf{B}_\perp ) ) \gamma_k \\ &- (\mathbf{E} + I c \mathbf{B}) \gamma^k ( {\mathbf{E}_\parallel}^{*} + {\mathbf{E}_\perp}^{*} + c I ( {\mathbf{B}_\parallel}^{*} + {\mathbf{B}_\perp}^{*} ) ) \gamma_k \\ &=  ({\mathbf{E}}^{*} + I c {\mathbf{B}}^{*}) ( \mathbf{E}_\parallel - \mathbf{E}_\perp + c I ( -\mathbf{B}_\parallel + \mathbf{B}_\perp ) ) \\ &+ (\mathbf{E} + I c \mathbf{B}) ( {\mathbf{E}_\parallel}^{*} - {\mathbf{E}_\perp}^{*} + c I ( -{\mathbf{B}_\parallel}^{*} + {\mathbf{B}_\perp}^{*} ) ) \\ \end{aligned}

This isn’t particularly pretty to expand out. I did attempt it, but my result looked wrong. For the application I have in mind I do not actually need anything more than T^{\mu 0}, so rather than show something wrong, I’ll just omit it (at least for now).

Calculating the divergence.

Working with 20, let’s calculate the divergence and see what one finds for the corresponding conservation relationship.

\begin{aligned}\frac{4}{\epsilon_0} \nabla \cdot T(a) &=\left\langle{{ \nabla ( {{F}}^{*} a \tilde{F} + \tilde{F} a {{F}}^{*} )}}\right\rangle \\ &=-\left\langle{{ F \stackrel{ \leftrightarrow }\nabla {{F}}^{*} a + {{F}}^{*} \stackrel{ \leftrightarrow }\nabla F a }}\right\rangle \\ &=-{\left\langle{{ F \stackrel{ \leftrightarrow }\nabla {{F}}^{*} + {{F}}^{*} \stackrel{ \leftrightarrow }\nabla F }}\right\rangle}_{1} \cdot a \\ &=-{\left\langle{{ F \stackrel{ \rightarrow }\nabla {{F}}^{*} +F \stackrel{ \leftarrow }\nabla {{F}}^{*} + {{F}}^{*} \stackrel{ \leftarrow }\nabla F+ {{F}}^{*} \stackrel{ \rightarrow }\nabla F}}\right\rangle}_{1} \cdot a \\ &=-\frac{1}{{\epsilon_0 c}} {\left\langle{{ F {{J}}^{*} - J {{F}}^{*} - {{J}}^{*} F+ {{F}}^{*} J}}\right\rangle}_{1} \cdot a \\ &= \frac{2}{\epsilon_0 c} a \cdot ( J \cdot {{F}}^{*} + {{J}}^{*} \cdot F) \\ &= \frac{4}{\epsilon_0 c} a \cdot \text{Real} ( J \cdot {{F}}^{*} ).\end{aligned}

We have then for the divergence

\begin{aligned}\nabla \cdot T(a) &= a \cdot \frac{1}{{ c }} \text{Real} \left( J \cdot {{F}}^{*} \right).\end{aligned} \quad\quad\quad(25)

Let’s write out J \cdot {{F}}^{*} in the (stationary) observer frame where J = (c\rho + \mathbf{J}) \gamma_0. This is

\begin{aligned}J \cdot {{F}}^{*} &={\left\langle{{ (c\rho + \mathbf{J}) \gamma_0 ( {\mathbf{E}}^{*} + I c {\mathbf{B}}^{*} ) }}\right\rangle}_{1} \\ &=- (\mathbf{J} \cdot {\mathbf{E}}^{*} ) \gamma_0- c \left( \rho {\mathbf{E}}^{*} + \mathbf{J} \times {\mathbf{B}}^{*}\right) \gamma_0\end{aligned}

Writing out the four divergence relationships in full one has

\begin{aligned}\nabla \cdot T(\gamma^0) &= - \frac{1}{{ c }} \text{Real}( \mathbf{J} \cdot {\mathbf{E}}^{*} ) \\ \nabla \cdot T(\gamma^k) &= - \text{Real} \left( \rho {{(E^k)}}^{*} + (\mathbf{J} \times {\mathbf{B}}^{*})_k \right)\end{aligned} \quad\quad\quad(26)

Just as in the real field case one has a nice relativistic split into energy density and force (momentum change) components, but one has to take real parts and conjugate half the terms appropriately when one has complex fields.

Combining the divergence relation for T(\gamma^0) with 22 the conservation relation for this subset of the energy momentum tensor becomes

\begin{aligned}\frac{1}{{c}} \frac{\partial {}}{\partial {t}}\frac{\epsilon_0}{2}(\mathbf{E} \cdot {\mathbf{E}}^{*} + c^2 \mathbf{B} \cdot {\mathbf{B}}^{*})+ c \epsilon_0 \text{Real} \boldsymbol{\nabla} \cdot (\mathbf{E} \times {\mathbf{B}}^{*} )=- \frac{1}{{c}} \text{Real}( \mathbf{J} \cdot {\mathbf{E}}^{*} ) \end{aligned} \quad\quad\quad(28)


\begin{aligned}\frac{\partial {}}{\partial {t}}\frac{\epsilon_0}{2}(\mathbf{E} \cdot {\mathbf{E}}^{*} + c^2 \mathbf{B} \cdot {\mathbf{B}}^{*})+ \text{Real} \boldsymbol{\nabla} \cdot \frac{1}{{\mu_0}} (\mathbf{E} \times {\mathbf{B}}^{*} )+ \text{Real}( \mathbf{J} \cdot {\mathbf{E}}^{*} ) = 0\end{aligned} \quad\quad\quad(29)

It is this last term that puts some meaning behind Jackson’s treatment since we now know how the energy and momentum are related as a four vector quantity in this complex formalism.

While I’ve used geometric algebra to get to this final result, I would be interested to compare how the intermediate mess compares with the same complex field vector result obtained via traditional vector techniques. I am sure I could try this myself, but am not interested enough to attempt it.

Instead, now that this result is obtained, proceeding on to application is now possible. My intention is to try the vacuum electromagnetic energy density example from [3] using complex exponential Fourier series instead of the doubled sum of sines and cosines that Bohm used.


[1] J.D. Jackson. Classical electrodynamics. Wiley, 2nd edition, 1975.

[2] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.

[3] D. Bohm. Quantum Theory. Courier Dover Publications, 1989.

Posted in Math and Physics Learning. | Tagged: , , , , , | 3 Comments »