
PHY450H1S. Relativistic Electrodynamics Lecture 24 (Taught by Prof. Erich Poppitz). Non-relativistic electrostatic Lagrangian.

Posted by peeterjoot on March 30, 2011


Reading.

Covering chapter 5 \S 37, and chapter 8 \S 65 material from the text [1].

Covering pp. 181-195: the Lagrangian for a system of non-relativistic charged particles to zeroth order in (v/c): electrostatic energy of a system of charges and mass renormalization.

A closed system of charged particles.

Consider a closed system of charged particles (m_a, q_a), and imagine there is a frame where they are all non-relativistic, v_a/c \ll 1. In this case we can describe the dynamics using a Lagrangian for the particles alone. That is,

\begin{aligned}\mathcal{L} = \mathcal{L}( \mathbf{x}_1, \cdots, \mathbf{x}_N, \mathbf{v}_1, \cdots, \mathbf{v}_N)\end{aligned} \hspace{\stretch{1}}(2.1)

This is possible if we work to order (v/c)^2.

If we try to go to O((v/c)^3), it’s difficult to use a Lagrangian for the particles alone.

This can be inferred from

\begin{aligned}P = \frac{2}{3} \frac{e^2}{c^3} {\left\lvert{\ddot{\mathbf{d}}}\right\rvert}^2\end{aligned} \hspace{\stretch{1}}(2.2)

because at this order, due to radiation effects, we need to include the EM field itself as a dynamical degree of freedom.

Start simple

Start with a system of (non-relativistic) free particles

\begin{aligned}S = -\sum_a m_a c \int ds_a = -\sum_a \int dt\, m_a c^2 \sqrt{ 1 - \frac{\mathbf{v}_a^2}{c^2} }\end{aligned}

So in the non-relativistic limit, after dropping the constant term that doesn’t affect the dynamics, our Lagrangian is

\begin{aligned}\mathcal{L}(\mathbf{x}_a, \mathbf{v}_a) = \sum_a \left( \frac{1}{{2}} m_a \mathbf{v}_a^2 + \frac{1}{{8}} \frac{m_a \mathbf{v}_a^4}{c^2} \right)\end{aligned} \hspace{\stretch{1}}(3.3)

The first term is O((v/c)^0), whereas the second is O((v/c)^2).
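As a quick sanity check of this expansion (my own aside, not part of the lecture, and assuming sympy is available), a couple of lines of symbolic algebra reproduce the \frac{1}{2} m \mathbf{v}^2 and \frac{1}{8} m \mathbf{v}^4/c^2 terms:

# Sketch only: expand the free particle Lagrangian L = -m c^2 sqrt(1 - v^2/c^2)
# in powers of v and check the coefficients quoted above.
import sympy as sp

m, c, v = sp.symbols('m c v', positive=True)
L = -m * c**2 * sp.sqrt(1 - v**2 / c**2)

print(sp.series(L, v, 0, 6).removeO())
# -> -c**2*m + m*v**2/2 + m*v**4/(8*c**2)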

Next include the fact that particles are charged.

\begin{aligned}\mathcal{L}_{\text{interaction}} = \sum_a \left( \cancel{q_a \frac{\mathbf{v}_a}{c} \cdot \mathbf{A}(\mathbf{x}_a, t)} - q_a \phi(\mathbf{x}_a, t) \right)\end{aligned} \hspace{\stretch{1}}(3.4)

Here we are working to O((v/c)^0), considering the particles to be moving slowly enough that we have only a Coulomb potential \phi, not \mathbf{A}.

HERE: these are NOT ‘EXTERNAL’ potentials. They are caused by all the charged particles.

\begin{aligned}\partial_i F^{i l} = \frac{4 \pi}{c} j^l\end{aligned} \hspace{\stretch{1}}(3.5)

For l = 0 the right hand side is 4 \pi \rho, while for l = \alpha it is 4 \pi \rho \mathbf{v}/c; we won’t need the spatial components today (that’s for tomorrow).

To leading order in v/c, particles only create Coulomb fields and they only “feel” Coulomb fields. Hence to O((v/c)^0), we have

\begin{aligned}\mathcal{L} = \sum_a \left( \frac{m_a \mathbf{v}_a^2}{2} - q_a \phi(\mathbf{x}_a, t) \right)\end{aligned} \hspace{\stretch{1}}(3.6)

What is \phi(\mathbf{x}_a, t), the Coulomb potential created by all the particles?

How do we find it?

\begin{aligned}\partial_i F^{i 0} = \frac{4 \pi}{c} j^0 = 4 \pi \rho\end{aligned} \hspace{\stretch{1}}(3.7)

or

\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{E} = 4 \pi \rho = - \boldsymbol{\nabla}^2 \phi \end{aligned} \hspace{\stretch{1}}(3.8)

where

\begin{aligned}\rho(\mathbf{x}, t) = \sum_a q_a \delta^3 (\mathbf{x} - \mathbf{x}_a(t))\end{aligned} \hspace{\stretch{1}}(3.9)

This is a Poisson equation

\begin{aligned}\Delta \phi(\mathbf{x}) = - 4 \pi \sum_a q_a \delta^3(\mathbf{x} - \mathbf{x}_a)\end{aligned} \hspace{\stretch{1}}(3.10)

(where the time dependence has been suppressed). This has solution

\begin{aligned}\phi(\mathbf{x}, t) = \sum_b \frac{q_b}{{\left\lvert{\mathbf{x} - \mathbf{x}_b(t)}\right\rvert}}\end{aligned} \hspace{\stretch{1}}(3.11)

This is the sum of the instantaneous Coulomb potentials of all the particles at the point of interest. Hence, it appears that the \phi(\mathbf{x}_a, t) appearing in the Lagrangian should be 3.11 evaluated at \mathbf{x} = \mathbf{x}_a.

However, evaluated at \mathbf{x} = \mathbf{x}_a, 3.11 becomes infinite due to the contribution of the a-th particle itself. The solution is to drop that self term, but let’s discuss this a bit first.
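As an aside (my own check, not from the lecture, assuming sympy), it’s easy to verify that each 1/{\left\lvert{\mathbf{x} - \mathbf{x}_b}\right\rvert} term in 3.11 is harmonic away from its source point, which is why 3.11 solves the Poisson equation 3.10 with delta function sources:

# Sketch: the Laplacian of 1/|x - x_b| vanishes for x != x_b; the -4 pi delta^3
# contribution lives entirely at the source point.
import sympy as sp

x, y, z, xb, yb, zb = sp.symbols('x y z x_b y_b z_b', real=True)
phi = 1 / sp.sqrt((x - xb)**2 + (y - yb)**2 + (z - zb)**2)

laplacian = sp.diff(phi, x, 2) + sp.diff(phi, y, 2) + sp.diff(phi, z, 2)
print(sp.simplify(laplacian))   # 0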

Let’s talk about the electrostatic energy of our system of particles.

\begin{aligned}\mathcal{E} &= \frac{1}{{8 \pi}} \int d^3 \mathbf{x} \left(\mathbf{E}^2 + \cancel{\mathbf{B}^2} \right) \\ &= \frac{1}{{8 \pi}} \int d^3 \mathbf{x} \mathbf{E} \cdot (-\boldsymbol{\nabla} \phi) \\ &= \frac{1}{{8 \pi}} \int d^3 \mathbf{x} \left( -\boldsymbol{\nabla} \cdot (\mathbf{E} \phi) + \phi \boldsymbol{\nabla} \cdot \mathbf{E} \right) \\ &= -\frac{1}{{8 \pi}} \oint d^2 \boldsymbol{\sigma} \cdot \mathbf{E} \phi + \frac{1}{{8 \pi}} \int d^3 \mathbf{x} \phi \boldsymbol{\nabla} \cdot \mathbf{E}  \\ \end{aligned}

The first term is zero: for a localized system of charges, \mathbf{E} \phi falls off as 1/r^3 or faster, while the bounding surface area grows only as r^2, so the surface integral vanishes as V \rightarrow \infty.

In the second term

\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{E} = 4 \pi \sum_a q_a \delta^3(\mathbf{x} - \mathbf{x}_a(t))\end{aligned} \hspace{\stretch{1}}(3.12)

So we have

\begin{aligned}\sum_a \frac{1}{{2}} \int d^3 \mathbf{x} q_a \delta^3(\mathbf{x} - \mathbf{x}_a) \phi(\mathbf{x})\end{aligned} \hspace{\stretch{1}}(3.13)

so that

\begin{aligned}\mathcal{E} = \frac{1}{{2}} \sum_a q_a \phi(\mathbf{x}_a)\end{aligned} \hspace{\stretch{1}}(3.14)

Now substitute 3.11 into 3.14 for

\begin{aligned}\mathcal{E} = \frac{1}{{2}} \sum_a \frac{q_a^2}{{\left\lvert{\mathbf{x} - \mathbf{x}_a}\right\rvert}} + \frac{1}{{2}} \sum_{a \ne b} \frac{q_a q_b}{{\left\lvert{\mathbf{x}_a - \mathbf{x}_b}\right\rvert}}\end{aligned} \hspace{\stretch{1}}(3.15)

or

\begin{aligned}\mathcal{E} = \frac{1}{{2}} \sum_a \frac{q_a^2}{{\left\lvert{\mathbf{x} - \mathbf{x}_a}\right\rvert}} + \sum_{a < b} \frac{q_a q_b}{{\left\lvert{\mathbf{x}_a - \mathbf{x}_b}\right\rvert}}\end{aligned} \hspace{\stretch{1}}(3.16)

The first term is the sum of the electrostatic self energies of all the particles. The source of this infinite self energy is the assumed \underline{point like nature} of the particles, i.e. we modeled the charge using a delta function instead of a continuous charge distribution.

Recall that if you have a charged sphere of radius r

PICTURE: total charge q, radius r, our electrostatic energy is

\begin{aligned}\mathcal{E} \sim \frac{q^2}{r}\end{aligned} \hspace{\stretch{1}}(3.17)

If we stipulate that the electron rest energy m_e c^2 is entirely of electrostatic origin, \sim e^2/r_e, we get

\begin{aligned}r_e \sim \frac{e^2}{m_e c^2}\end{aligned} \hspace{\stretch{1}}(3.18)

This is called the classical radius of the electron, and is of a very small scale 10^{-13} \text{cm}.
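Plugging in numbers (my own quick check, Gaussian cgs values assumed):

# Classical electron radius r_e = e^2 / (m_e c^2) in Gaussian units.
e = 4.803e-10    # statcoulomb
m_e = 9.109e-28  # g
c = 2.998e10     # cm/s

print(e**2 / (m_e * c**2))   # ~2.8e-13 cm, the scale quoted above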

As a matter of fact the applicability of classical electrodynamics breaks down much sooner than this scale since quantum effects start kicking in.

Our Lagrangian is now

\begin{aligned}\mathcal{L}_a = \frac{1}{{2}} m_a \mathbf{v}_a^2 - q_a \phi(\mathbf{x}_a, t)\end{aligned} \hspace{\stretch{1}}(3.19)

where \phi is the electrostatic potential due to all \underline{other} particles, so we have

\begin{aligned}\mathcal{L}_a = \frac{1}{{2}} m_a \mathbf{v}_a^2 - \sum_{b \ne a} \frac{q_a q_b }{{\left\lvert{\mathbf{x}_a - \mathbf{x}_b}\right\rvert}}\end{aligned} \hspace{\stretch{1}}(3.20)

and for the system

\begin{aligned}\mathcal{L} = \frac{1}{{2}} \sum_a m_a \mathbf{v}_a^2 - \sum_{a < b} \frac{q_a q_b }{{\left\lvert{\mathbf{x}_a - \mathbf{x}_b}\right\rvert}}\end{aligned} \hspace{\stretch{1}}(3.21)

This is THE Lagrangian for electrodynamics in the non-relativistic case, derived starting from the relativistic action.
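To see that this Lagrangian really does encode the expected dynamics, here is a small sketch (my own addition, assuming sympy) applying the Euler-Lagrange equations to 3.21 for just two charges restricted to the x-axis (taking x_1 > x_2 so that {\left\lvert{x_1 - x_2}\right\rvert} = x_1 - x_2), which recovers the Coulomb force:

# Sketch: Euler-Lagrange equations for two charges on a line.
import sympy as sp
from sympy.calculus.euler import euler_equations

t = sp.Symbol('t')
m1, m2, q1, q2 = sp.symbols('m1 m2 q1 q2', positive=True)
x1 = sp.Function('x1')(t)
x2 = sp.Function('x2')(t)

L = m1 * x1.diff(t)**2 / 2 + m2 * x2.diff(t)**2 / 2 - q1 * q2 / (x1 - x2)

for eq in euler_equations(L, [x1, x2], t):
    print(eq)
# equivalent to m1 x1'' = +q1 q2/(x1 - x2)^2 and m2 x2'' = -q1 q2/(x1 - x2)^2,
# a repulsive force between like charges, as expected.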

What’s next?

We continue to the next order of v/c tomorrow.

References

[1] L.D. Landau and E.M. Lifshitz. The classical theory of fields. Butterworth-Heinemann, 1980.


PHY450H1S. Relativistic Electrodynamics Lecture 21 (Taught by Prof. Erich Poppitz). More on EM fields due to dipole radiation.

Posted by peeterjoot on March 25, 2011


Reading.

Covering chapter 8 material from the text [1].

Covering lecture notes pp. 147-165: radiated power (154); fields in the “wave zone” and discussions of approximations made (155-159); EM fields due to electric dipole radiation (160-163); Poynting vector, angular distribution, and power of dipole radiation (164-165) [Wednesday, Mar. 16…]

Where we left off.

For a localized charge distribution, we’d arrived at expressions for the scalar and vector potentials far from the point where the charges and currents were localized. This was then used to consider the specific case of a dipole system where one of the charges had a sinusoidal oscillation. The charge positions for the negative and positive charges respectively were

\begin{aligned}z_{-} &= 0 \\ z_{+} &= \mathbf{e}_3( z_0 + a \sin(\omega t)) ,\end{aligned} \hspace{\stretch{1}}(2.1)

so that our dipole moment \mathbf{d} = \int d^3 \mathbf{x}' \rho(\mathbf{x}') \mathbf{x}' is

\begin{aligned}\mathbf{d} = \mathbf{e}_3 q (z_0 + a \sin(\omega t)).\end{aligned} \hspace{\stretch{1}}(2.3)

The scalar potential, to first order in a number of Taylor expansions at our point far from the source, evaluated at the retarded time t_r = t - {\left\lvert{\mathbf{x}}\right\rvert}/c, was found to be

\begin{aligned}A^0(\mathbf{x}, t) = \frac{z q}{{\left\lvert{\mathbf{x}}\right\rvert}^3} ( z_0 + a \sin(\omega t_r) ) + \frac{z q}{c {\left\lvert{\mathbf{x}}\right\rvert}^2} a \omega \cos(\omega t_r),\end{aligned} \hspace{\stretch{1}}(2.4)

and our vector potential, also with the same approximations, was

\begin{aligned}\mathbf{A}(\mathbf{x}, t) = \frac{1}{{c {\left\lvert{\mathbf{x}}\right\rvert} }} \mathbf{e}_3 q a \omega \cos(\omega t_r).\end{aligned} \hspace{\stretch{1}}(2.5)

We found that the electric field (neglecting any non-radiation terms that died off as inverse square in the distance) was

\begin{aligned}\mathbf{E} = \frac{a \omega^2 q}{c^2 {\left\lvert{\mathbf{x}}\right\rvert}} \sin( \omega (t - {\left\lvert{\mathbf{x}}\right\rvert}/c) ) \left( \mathbf{e}_3 - \hat{\mathbf{r}} \frac{z}{{\left\lvert{\mathbf{x}}\right\rvert}} \right).\end{aligned} \hspace{\stretch{1}}(2.6)

Direct computation of the magnetic radiation field

Taking the curl of the vector potential 2.5 for the magnetic field, we’ll neglect the contribution where the gradient acts on the 1/{\left\lvert{\mathbf{x}}\right\rvert} factor, since that term will be inverse square, and die off too quickly far from the source

\begin{aligned}\mathbf{B}&= \boldsymbol{\nabla} \times \mathbf{A} \\ &= \boldsymbol{\nabla} \times \frac{1}{{c {\left\lvert{\mathbf{x}}\right\rvert} }} \mathbf{e}_3 q a \omega \cos(\omega (t - {\left\lvert{\mathbf{x}}\right\rvert}/c)) \\ &\approx - \frac{q a \omega}{c {\left\lvert{\mathbf{x}}\right\rvert} } \mathbf{e}_3 \times \boldsymbol{\nabla} \cos(\omega (t - {\left\lvert{\mathbf{x}}\right\rvert}/c)) \\ &= - \frac{q a \omega}{c {\left\lvert{\mathbf{x}}\right\rvert} } \left( -\frac{\omega}{c} \right)(\mathbf{e}_3 \times \boldsymbol{\nabla} {\left\lvert{\mathbf{x}}\right\rvert}) \sin(\omega (t - {\left\lvert{\mathbf{x}}\right\rvert}/c)),\end{aligned}

which is

\begin{aligned}\mathbf{B} = \frac{q a \omega^2}{c^2 {\left\lvert{\mathbf{x}}\right\rvert} } (\mathbf{e}_3 \times \hat{\mathbf{r}}) \sin(\omega (t - {\left\lvert{\mathbf{x}}\right\rvert}/c)).\end{aligned} \hspace{\stretch{1}}(3.7)

Comparing to 2.6, we see that this equals \hat{\mathbf{r}} \times \mathbf{E} as expected.

An aside: A tidier form for the electric dipole field

We can rewrite the electric field 2.6 in terms of the retarded time dipole

\begin{aligned}\mathbf{E} = \frac{1}{{c^2 {\left\lvert{\mathbf{x}}\right\rvert}}} \Bigl( -\ddot{\mathbf{d}}(t_r) + \hat{\mathbf{r}} ( \ddot{\mathbf{d}}(t_r) \cdot \hat{\mathbf{r}} ) \Bigr),\end{aligned} \hspace{\stretch{1}}(4.8)

where

\begin{aligned}\ddot{\mathbf{d}}(t) = - q a \omega^2 \sin(\omega t) \mathbf{e}_3\end{aligned} \hspace{\stretch{1}}(4.9)

Then using the vector identity

\begin{aligned}(\mathbf{A} \times \hat{\mathbf{r}} ) \times \hat{\mathbf{r}} = -\mathbf{A} + (\hat{\mathbf{r}} \cdot \mathbf{A}) \hat{\mathbf{r}},\end{aligned} \hspace{\stretch{1}}(4.10)

we have for the fields

\begin{aligned}\boxed{\begin{aligned}\mathbf{E} &= \frac{1}{{c^2 {\left\lvert{\mathbf{x}}\right\rvert}}} (\ddot{\mathbf{d}}(t_r) \times \hat{\mathbf{r}}) \times \hat{\mathbf{r}} \\ \mathbf{B} &= \hat{\mathbf{r}} \times \mathbf{E}.\end{aligned}}\end{aligned} \hspace{\stretch{1}}(4.11)
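The identity 4.10 is easy to spot check numerically (my own aside, assuming numpy is handy):

# Sketch: numerical spot check of (A x rhat) x rhat = -A + (rhat . A) rhat.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=3)
rhat = rng.normal(size=3)
rhat /= np.linalg.norm(rhat)

lhs = np.cross(np.cross(A, rhat), rhat)
rhs = -A + np.dot(rhat, A) * rhat
print(np.allclose(lhs, rhs))   # True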

Calculating the energy flux

Our Poynting vector, the energy flux, is

\begin{aligned}\mathbf{S} = \frac{c}{4 \pi} \mathbf{E} \times \mathbf{B} =\frac{c}{4 \pi}\left( \frac{q a \omega^2}{c^2 {\left\lvert{\mathbf{x}}\right\rvert} } \right)^2\sin^2(\omega (t - {\left\lvert{\mathbf{x}}\right\rvert}/c))\left( \mathbf{e}_3 - \hat{\mathbf{r}} \frac{z}{{\left\lvert{\mathbf{x}}\right\rvert}} \right) \times (\hat{\mathbf{r}} \times \mathbf{e}_3).\end{aligned} \hspace{\stretch{1}}(5.12)

Expanding just the cross terms we have

\begin{aligned}\left( \mathbf{e}_3 - \hat{\mathbf{r}} \frac{z}{{\left\lvert{\mathbf{x}}\right\rvert}} \right) \times (\hat{\mathbf{r}} \times \mathbf{e}_3)&=-(\hat{\mathbf{r}} \times \mathbf{e}_3) \times \mathbf{e}_3 - \frac{z}{{\left\lvert{\mathbf{x}}\right\rvert}} (\mathbf{e}_3 \times \hat{\mathbf{r}}) \times \hat{\mathbf{r}} \\ &=-(-\hat{\mathbf{r}} + \mathbf{e}_3(\mathbf{e}_3 \cdot \hat{\mathbf{r}}) ) - \frac{z}{{\left\lvert{\mathbf{x}}\right\rvert}} (-\mathbf{e}_3 + \hat{\mathbf{r}} (\hat{\mathbf{r}} \cdot \mathbf{e}_3)) \\ &=\hat{\mathbf{r}} - \cancel{\mathbf{e}_3(\mathbf{e}_3 \cdot \hat{\mathbf{r}})} + \frac{z}{{\left\lvert{\mathbf{x}}\right\rvert}} (\cancel{\mathbf{e}_3} - \hat{\mathbf{r}} (\hat{\mathbf{r}} \cdot \mathbf{e}_3)) \\ &=\hat{\mathbf{r}}( 1 - (\hat{\mathbf{r}} \cdot \mathbf{e}_3)^2 ).\end{aligned}

Note that we’ve utilized \hat{\mathbf{r}} \cdot \mathbf{e}_3 = z/{\left\lvert{\mathbf{x}}\right\rvert} to do the cancellations above, and for the final grouping. Since \hat{\mathbf{r}} \cdot \mathbf{e}_3 = \cos\theta, the direction cosine of the unit radial vector with the z-axis, we have for the direction of the Poynting vector

\begin{aligned}\hat{\mathbf{r}}( 1 - (\hat{\mathbf{r}} \cdot \mathbf{e}_3)^2 )&= \hat{\mathbf{r}} (1 - \cos^2\theta) \\ &= \hat{\mathbf{r}} \sin^2\theta.\end{aligned}

Our Poynting vector is found to be directed radially outwards, and is

\begin{aligned}\mathbf{S} =\frac{c}{4 \pi}\left( \frac{q a \omega^2}{c^2 {\left\lvert{\mathbf{x}}\right\rvert} } \right)^2\sin^2(\omega (t - {\left\lvert{\mathbf{x}}\right\rvert}/c)) \sin^2\theta \hat{\mathbf{r}}.\end{aligned} \hspace{\stretch{1}}(5.13)

The intensity is constant along the curves

\begin{aligned}{\left\lvert{\sin\theta}\right\rvert} \sim r\end{aligned} \hspace{\stretch{1}}(5.14)

PICTURE: dipole lobes diagram with \mathbf{d} up along the z axis, and \hat{\mathbf{r}} pointing in an arbitrary direction.

FIXME: understand how this lobes picture comes from our result above.
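My guess (an aside of mine, not from the lecture) is that the lobes are just a polar plot of the angular factor in {\left\lvert{\mathbf{S}}\right\rvert}, i.e. r(\theta) = \sin^2\theta plotted about the dipole (z) axis. A matplotlib sketch, assuming that library is available:

# Sketch: polar plot of the dipole radiation angular distribution sin^2(theta).
import numpy as np
import matplotlib.pyplot as plt

theta = np.linspace(0, 2 * np.pi, 500)
r = np.sin(theta)**2   # angular factor of the radiated power

ax = plt.subplot(projection='polar')
ax.plot(theta, r)
ax.set_theta_zero_location('N')   # put the dipole (z) axis at the top
plt.show()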

PICTURE: field diagram along spherical north-south great circles, and the electric field \mathbf{E} along what looks like it is the \hat{\boldsymbol{\theta}} direction, and \mathbf{B} along what appear to be the \hat{\boldsymbol{\phi}} direction, and \mathbf{S} pointing radially out.

Utilizing the spherical unit vectors to express the field directions.

In class we see the picture showing these spherical unit vector directions. We can see this algebraically as well. Recall that we have for our unit vectors

\begin{aligned}\hat{\mathbf{r}} &= \mathbf{e}_1 \sin\theta \cos\phi + \mathbf{e}_2 \sin\theta \sin\phi + \mathbf{e}_3 \cos\theta \\ \hat{\boldsymbol{\phi}} &= \mathbf{e}_2 \cos\phi - \mathbf{e}_1 \sin\phi \\ \hat{\boldsymbol{\theta}} &= \cos\theta ( \mathbf{e}_1 \cos\phi + \mathbf{e}_2 \sin\phi ) - \mathbf{e}_3 \sin\theta,\end{aligned} \hspace{\stretch{1}}(5.15)

with the volume element orientation governed by cyclic permutations of

\begin{aligned}\hat{\mathbf{r}} \times \hat{\boldsymbol{\theta}} = \hat{\boldsymbol{\phi}}.\end{aligned} \hspace{\stretch{1}}(5.18)

We can now express the direction of the magnetic field in terms of the spherical unit vectors

\begin{aligned}\mathbf{e}_3 \times \hat{\mathbf{r}}&=\mathbf{e}_3 \times (\mathbf{e}_1 \sin\theta \cos\phi + \mathbf{e}_2 \sin\theta \sin\phi + \mathbf{e}_3 \cos\theta ) \\ &=\mathbf{e}_3 \times (\mathbf{e}_1 \sin\theta \cos\phi + \mathbf{e}_2 \sin\theta \sin\phi ) \\ &=\mathbf{e}_2 \sin\theta \cos\phi - \mathbf{e}_1 \sin\theta \sin\phi  \\ &=\sin\theta ( \mathbf{e}_2 \cos\phi - \mathbf{e}_1 \sin\phi ) \\ &=\sin\theta \hat{\boldsymbol{\phi}}.\end{aligned}
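These unit vector manipulations are easy to get wrong, so here is a small sympy spot check (my own, not from the lecture) of 5.18 and of the cross product just computed:

# Sketch: verify rhat x thetahat = phihat and e3 x rhat = sin(theta) phihat.
import sympy as sp

th, ph = sp.symbols('theta phi', real=True)
e3 = sp.Matrix([0, 0, 1])
rhat = sp.Matrix([sp.sin(th)*sp.cos(ph), sp.sin(th)*sp.sin(ph), sp.cos(th)])
thetahat = sp.Matrix([sp.cos(th)*sp.cos(ph), sp.cos(th)*sp.sin(ph), -sp.sin(th)])
phihat = sp.Matrix([-sp.sin(ph), sp.cos(ph), 0])

print(sp.simplify(rhat.cross(thetahat) - phihat))        # zero vector
print(sp.simplify(e3.cross(rhat) - sp.sin(th)*phihat))   # zero vector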

The direction of the electric field was in the direction of (\ddot{\mathbf{d}} \times \hat{\mathbf{r}}) \times \hat{\mathbf{r}} where \mathbf{d} was directed along the z-axis. This is then

\begin{aligned}(\mathbf{e}_3 \times \hat{\mathbf{r}}) \times \hat{\mathbf{r}}&=-\sin\theta \hat{\boldsymbol{\phi}} \times \hat{\mathbf{r}} \\ &=-\sin\theta \hat{\boldsymbol{\theta}}\end{aligned}

\begin{aligned}\boxed{\begin{aligned}\mathbf{E} &= \frac{ q a \omega^2 }{c^2 {\left\lvert{\mathbf{x}}\right\rvert}} \sin(\omega t_r) \sin\theta \hat{\boldsymbol{\theta}} \\ \mathbf{B} &= -\frac{ q a \omega^2 }{c^2 {\left\lvert{\mathbf{x}}\right\rvert}} \sin(\omega t_r) \sin\theta \hat{\boldsymbol{\phi}} \\ \mathbf{S} &= \left( \frac{ q a \omega^2 }{c^2 {\left\lvert{\mathbf{x}}\right\rvert}} \right)^2 \sin^2(\omega t_r) \sin^2\theta \hat{\mathbf{r}} \end{aligned}}\end{aligned} \hspace{\stretch{1}}(5.19)

Calculating the power

Integrating \mathbf{S} over a spherical surface, we can calculate the power

FIXME: remind myself why Power is an appropriate label for this integral.

This is

\begin{aligned}P(r, t)&= \oint d^2 \boldsymbol{\sigma} \cdot \mathbf{S} \\ &= \int \cancel{r^2} \sin\theta d\theta d\phi \frac{c}{4 \pi}\left( \frac{q a \omega^2}{c^2 \cancel{{\left\lvert{\mathbf{x}}\right\rvert}} } \right)^2\sin^2(\omega (t - {\left\lvert{\mathbf{x}}\right\rvert}/c)) \sin^2\theta \\ &=\frac{q^2 a^2 \omega^4}{2 c^3 }\sin^2(\omega (t - r/c))\underbrace{\int \sin^3\theta d\theta}_{=4/3}\end{aligned}

\begin{aligned}P(r, t) = \frac{2}{3} \frac{q^2 a^2 \omega^4}{c^3} \sin^2(\omega (t - r/c)) =\frac{q^2 a^2 \omega^4}{3 c^3} (1 - \cos(2 \omega (t - r/c)))\end{aligned} \hspace{\stretch{1}}(6.20)

Averaging over a period kills off the cosine term

\begin{aligned}\left\langle{{P(r, t)}}\right\rangle = \frac{\omega}{2 \pi} \int_0^{2 \pi/\omega} dt P(t) = \frac{q^2 a^2 \omega^4}{3 c^3},\end{aligned} \hspace{\stretch{1}}(6.21)

and we once again see that higher frequencies radiate more power (i.e. why the sky is blue).
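Both integrals used above are easily verified (my own sympy check, not part of the lecture):

# Sketch: the angular integral and the period average used in 6.20 and 6.21.
import sympy as sp

theta, t, omega = sp.symbols('theta t omega', positive=True)

print(sp.integrate(sp.sin(theta)**3, (theta, 0, sp.pi)))   # 4/3
avg = sp.integrate(sp.sin(omega*t)**2, (t, 0, 2*sp.pi/omega)) * omega / (2*sp.pi)
print(sp.simplify(avg))                                     # 1/2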

Types of radiation.

We’ve seen now radiation from localized current distributions, and called that electric dipole radiation. There are many other sources of electrodynamic radiation, of which here are a couple.

\begin{itemize}
\item Magnetic dipole radiation.

This will be covered in more depth in the tutorial. PICTURE: a positively oriented circulating current I = I_o \sin \omega t, with magnetic dipole moment \boldsymbol{\mu} = \pi b^2 I \mathbf{e}_3.

This sort of current loop is a source of magnetic dipole radiation.

\item Cyclotron radiation.

This is the label for acceleration induced radiation (at high velocities) by particles moving in a uniform magnetic field.

PICTURE: circular orbit with speed v = \omega r. The particle trajectories are

\begin{aligned}x &= r \cos \omega t \\ y &= r \sin \omega t\end{aligned} \hspace{\stretch{1}}(7.22)

This problem can be treated as two electric dipoles out of phase by 90 degrees.

PICTURE: 4 lobe dipole picture, with two perpendicular dipole moment arrows. Resulting superposition sort of smeared together.

\end{itemize}

Energy momentum conservation.

We’ve defined

\begin{aligned}\begin{array}{l l l}\mathcal{E} &= \frac{\mathbf{E}^2 + \mathbf{B}^2}{8\pi} & \mbox{Energy density} \\ \frac{\mathbf{S}}{c^2} &= \frac{1}{{4 \pi c}} \mathbf{E} \times \mathbf{B} & \mbox{Momentum density}\end{array}\end{aligned} \hspace{\stretch{1}}(8.24)

(where \mathbf{S} was defined as the energy flow).

Dimensional analysis arguments and analogy with classical mechanics were used to motivate these definitions, as opposed to starting with the field action to find these as a consequence of a symmetry. We also saw that we had a conservation relationship that had the appearance of a four divergence of a four vector. With

\begin{aligned}P^i = (\mathcal{E}/c, \mathbf{S}/c^2),\end{aligned} \hspace{\stretch{1}}(8.25)

that was

\begin{aligned}\partial_i P^i = - \mathbf{E} \cdot \mathbf{j}/c^2\end{aligned} \hspace{\stretch{1}}(8.26)

The left hand side has the appearance of a Lorentz scalar, since it contracts two four vectors, but the right hand side is the continuum equivalent to the energy term of the Lorentz force law and cannot be a Lorentz scalar. The conclusion has to be that P^i is not a four vector, and it’s natural to assume that these are components of a rank 2 four tensor instead (since we’ve got just one component of a rank 1 four tensor on the RHS). We want to find out how the EM energy and momentum densities transform.

Classical mechanics reminder.

Recall that in particle mechanics when we had a Lagrangian that had no explicit time dependence

\begin{aligned}\mathcal{L}(q, \dot{q}, \cancel{t}),\end{aligned} \hspace{\stretch{1}}(8.27)

that energy resulted from time translation invariance. We found this by taking the full derivative of the Lagrangian, and employing the EOM for the system to find a conserved quantity

\begin{aligned}\frac{d{{}}}{dt} \mathcal{L}(q, \dot{q}) &=\frac{\partial {\mathcal{L}}}{\partial {q}} \dot{q}+\frac{\partial {\mathcal{L}}}{\partial {\dot{q}}} \ddot{q} \\ &=\frac{d{{}}}{dt} \left( \frac{\partial {\mathcal{L}}}{\partial {\dot{q}}} \right) \dot{q}+\frac{\partial {\mathcal{L}}}{\partial {\dot{q}}} \ddot{q} \\ &=\frac{d{{}}}{dt} \left( \frac{\partial {\mathcal{L}}}{\partial {\dot{q}}} \dot{q} \right) \end{aligned}

Taking differences we have

\begin{aligned}\frac{d{{}}}{dt} \left( \frac{\partial {\mathcal{L}}}{\partial {\dot{q}}} \dot{q} -\mathcal{L} \right) = 0,\end{aligned} \hspace{\stretch{1}}(8.28)

and we labeled this conserved quantity the energy

\begin{aligned}\mathcal{E} = \frac{\partial {\mathcal{L}}}{\partial {\dot{q}}} \dot{q} -\mathcal{L} \end{aligned} \hspace{\stretch{1}}(8.29)
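For a concrete example (my own, assuming sympy), take \mathcal{L} = m \dot{q}^2/2 - V(q) and form 8.29; the conserved quantity is the familiar kinetic plus potential energy:

# Sketch: E = qdot * dL/dqdot - L for L = m qdot^2/2 - V(q).
import sympy as sp

m, q, qdot = sp.symbols('m q qdot', positive=True)
V = sp.Function('V')

L = m * qdot**2 / 2 - V(q)
E = sp.diff(L, qdot) * qdot - L
print(sp.simplify(E))   # m*qdot**2/2 + V(q)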

Our approach from the EM field action.

Our EM field action was

\begin{aligned}S = -\frac{1}{{16 \pi c}} \int d^4 x F_{i j} F^{i j}.\end{aligned} \hspace{\stretch{1}}(8.32)

The squared field tensor F_{i j} F^{i j} only depends on the fields A^i(\mathbf{x}, t) and their derivatives \partial_j A^i(\mathbf{x}, t), and not on the coordinates \mathbf{x}, t themselves. This is very similar to the particle action with no explicit time dependence

\begin{aligned}S = \int dt \left( \frac{m \dot{q}^2}{2} - V(q) \right).\end{aligned} \hspace{\stretch{1}}(8.32)

For the particle case we obtained our conservation relationship by taking the time derivative of the Lagrangian. The situations are very similar, with the action having no explicit dependence on space or time, only on the fields, so what will we get if we take the coordinate partials of the EM Lagrangian density?

We will chew on this tomorrow and calculate

\begin{aligned}\frac{\partial {}}{\partial {x^k}} \Bigl( F_{i j} F^{i j} \Bigr)\end{aligned} \hspace{\stretch{1}}(8.32)

in full gory detail. We will find that instead of a single conserved quantity C^A(\mathbf{x}, t), we find a quantity that changes only through flux escaping through the boundary of a bounding surface.

References

[1] L.D. Landau and E.M. Lifshitz. The classical theory of fields. Butterworth-Heinemann, 1980.


PHY450H1S. Relativistic Electrodynamics Tutorial 8 (TA: Simon Freedman). EM fields from magnetic dipole current.

Posted by peeterjoot on March 24, 2011


Review.

Recall for the electric dipole we started with a system like

\begin{aligned}z_{+} &= 0 \\ z_{-} &= \mathbf{e}_3( z_0 + a \sin(\omega t))\end{aligned} \hspace{\stretch{1}}(1.1)

(we did it with the opposite polarity)

\begin{aligned}\mathbf{E} &= \frac{q a \omega^2}{c^2} \sin(\omega t_r) \sin\theta \frac{1}{{{\left\lvert{\mathbf{x}}\right\rvert}}} (- \hat{\boldsymbol{\theta}} ) = \frac{1}{{c^2 {\left\lvert{\mathbf{x}}\right\rvert}}} (\ddot{\mathbf{d}}(t_r) \times \hat{\mathbf{r}} ) \times \hat{\mathbf{r}}  \\ \mathbf{B} &= -\frac{q a \omega^2}{c^2} \sin(\omega t_r) \sin\theta \frac{1}{{{\left\lvert{\mathbf{x}}\right\rvert}}} (- \hat{\boldsymbol{\phi}} ) = \hat{\mathbf{r}} \times \mathbf{E}.\end{aligned} \hspace{\stretch{1}}(1.3)

This was after the multipole expansion (\lambda \gg l).

Physical analogy: a high and low frequency wave interacting. The low frequency wave becomes the envelope, and doesn’t really “see” the dynamics of the high frequency wave.

We also figured out the Poynting vector was

\begin{aligned}\mathbf{S} = \frac{c}{4\pi} \mathbf{E} \times \mathbf{B} = \hat{\mathbf{r}} \frac{ \sin^2 \theta {\left\lvert{\ddot{\mathbf{d}}(t_r) }\right\rvert}^2 }{ 4 \pi c^3 {\left\lvert{\mathbf{x}}\right\rvert}^2},\end{aligned} \hspace{\stretch{1}}(1.5)

and our Power was

\begin{aligned}\text{Power}(R) = \oint_{{S_R}^2} d^2 \boldsymbol{\sigma} \cdot \left\langle{\mathbf{S}}\right\rangle = \frac{ q^2 a^2 \omega^4 }{3 c^3}.\end{aligned} \hspace{\stretch{1}}(1.6)

Magnetic dipole

PICTURE: positively oriented current I circulating around the normal \mathbf{m} at radius b in the x-y plane. We have

(from third year)

\begin{aligned}{\left\lvert{\mathbf{m}}\right\rvert} = I \pi b^2.\end{aligned} \hspace{\stretch{1}}(2.7)

With the magnetic moment directed upwards along the z-axis

\begin{aligned}\mathbf{m} = I \pi b^2 \mathbf{e}_3,\end{aligned} \hspace{\stretch{1}}(2.8)

where we have a frequency dependence in the current

\begin{aligned}I = I_o \sin(\omega t).\end{aligned} \hspace{\stretch{1}}(2.9)

With no static charge distribution we have zero scalar potential

\begin{aligned}\rho = 0 \implies A^0 = 0.\end{aligned} \hspace{\stretch{1}}(2.10)

Our first moments approximation of the vector potential was

\begin{aligned}A^\alpha(\mathbf{x}, t) \approx \frac{1}{{c {\left\lvert{\mathbf{x}}\right\rvert}}} \int d^3 \mathbf{x}' j^\alpha(\mathbf{x}', t) + O(\text{higher moments}).\end{aligned} \hspace{\stretch{1}}(2.11)

Now we use our new trick of inserting a factor of unity to rewrite the current

\begin{aligned}\left( \frac{\partial {{x'}^\alpha}}{\partial {{x'}^\beta}} \right) j^\beta = {\delta^\alpha}_\beta j^\beta = j^\alpha,\end{aligned} \hspace{\stretch{1}}(2.12)

or equivalently

\begin{aligned}\boldsymbol{\nabla} x^\alpha = \mathbf{e}_\alpha.\end{aligned} \hspace{\stretch{1}}(2.13)

Carrying out the trickery we have

\begin{aligned}A^\alpha &= \frac{1}{{c{\left\lvert{\mathbf{x}}\right\rvert}}} \int d^3 \mathbf{x}' ( \boldsymbol{\nabla}' {x'}^\alpha ) \cdot \mathbf{J}(\mathbf{x}', t_r) \\ &= \frac{1}{{c{\left\lvert{\mathbf{x}}\right\rvert}}} \int d^3 \mathbf{x}' ( \partial_{\beta'} {x'}^\alpha ) j^\beta (\mathbf{x}', t_r) \\ &= \frac{1}{{c{\left\lvert{\mathbf{x}}\right\rvert}}} \int d^3 \mathbf{x}' ( \partial_{\beta'} ({x'}^\alpha j^\beta(\mathbf{x}', t_r)) - {x'}^\alpha (\underbrace{\boldsymbol{\nabla}' \cdot \mathbf{J}(\mathbf{x}', t_r)}_{=-\partial_0 \rho = 0}) ) \\ &= \frac{1}{{c{\left\lvert{\mathbf{x}}\right\rvert}}} \int d^3 \mathbf{x}' \boldsymbol{\nabla}' \cdot ({x'}^\alpha \mathbf{J}) \\ &= \oint_{{S_R}^2} d^2 \boldsymbol{\sigma} \cdot ({x'}^\alpha \mathbf{J}) \\ &= 0.\end{aligned}

We see that the first order approximation is insufficient to calculate the vector potential for the magnetic dipole system, and that we have

\begin{aligned}A^\alpha = 0 + \text{higher moments}\end{aligned} \hspace{\stretch{1}}(2.14)

Looking back to what we’d done in class, we’d also dropped this term of the vector potential, using the same arguments. What we had left was

\begin{aligned}\mathbf{A}(\mathbf{x}, t) = \frac{1}{{c {\left\lvert{\mathbf{x}}\right\rvert} }} \dot{\mathbf{d}}\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert} }{c}\right)= \frac{1}{{ c {\left\lvert{\mathbf{x}}\right\rvert} }} \int d^3 \mathbf{x}' {x'}^\alpha \frac{\partial {}}{\partial {t}}\rho\left(\mathbf{x}', t - \frac{{\left\lvert{\mathbf{x}}\right\rvert} }{c}\right),\end{aligned} \hspace{\stretch{1}}(2.15)

but that additional term is also zero in this magnetic dipole system since we have no static charge distribution.

There are two options to resolve this

\begin{enumerate}
\item calculate \mathbf{A} using higher order moments \lambda \gg b. Go to next order in b/\lambda.

This is complicated!

\item Use EM dualities (the slick way!)
\end{enumerate}

Recall that Maxwell’s equations are

\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{E} &= 4 \pi \rho \\ \boldsymbol{\nabla} \cdot \mathbf{B} &= 0 \\ \boldsymbol{\nabla} \times \mathbf{E} &= -\frac{1}{{c}} \frac{\partial {\mathbf{B}}}{\partial {t}} \\ \boldsymbol{\nabla} \times \mathbf{B} &= \frac{1}{{c}} \frac{\partial {\mathbf{E}}}{\partial {t}} + \frac{4 \pi}{c} \mathbf{J}\end{aligned} \hspace{\stretch{1}}(2.16)

If j^i = 0, then taking \mathbf{E} \rightarrow \mathbf{B} and \mathbf{B} \rightarrow -\mathbf{E} (the sign is needed to preserve the curl equations) we get the same equations.

Introduce dual charges \rho_m and \mathbf{J}_m

\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{E} &= 4 \pi \rho_e \\ \boldsymbol{\nabla} \cdot \mathbf{B} &= 4 \pi \rho_m \\ \boldsymbol{\nabla} \times \mathbf{E} &= -\frac{1}{{c}} \frac{\partial {\mathbf{B}}}{\partial {t}} + \frac{4 \pi}{c} \mathbf{J}_m\\ \boldsymbol{\nabla} \times \mathbf{B} &= \frac{1}{{c}} \frac{\partial {\mathbf{E}}}{\partial {t}} + \frac{4 \pi}{c} \mathbf{J}_e\end{aligned} \hspace{\stretch{1}}(2.20)

Duality \mathbf{E} \rightarrow \mathbf{B} provided \rho_e \rightarrow \rho_m and \mathbf{J}_e \rightarrow \mathbf{J}_m, or

\begin{aligned}F^{i j} &\rightarrow \tilde{F}^{i j} = \epsilon^{i j k l} F_{k l} \\ j^{k} &\rightarrow \tilde{j}^k\end{aligned} \hspace{\stretch{1}}(2.24)

With radiation: the duality transformation takes the electric dipole moment to the magnetic dipole moment \mathbf{d} \rightarrow \mathbf{m}.

\begin{aligned}\mathbf{B} &= -\frac{1}{{c^2 {\left\lvert{\mathbf{x}}\right\rvert}}} (\ddot{\mathbf{m}} \times \hat{\mathbf{r}}) \times \hat{\mathbf{r}} \\ \mathbf{E} &= \hat{\mathbf{r}} \times \mathbf{B}\end{aligned} \hspace{\stretch{1}}(2.26)

with

\begin{aligned}\text{Power} \sim \left\langle{ {\left\lvert{\ddot{\mathbf{m}}}\right\rvert}^2 }\right\rangle\end{aligned} \hspace{\stretch{1}}(2.28)

\begin{aligned}\left\langle{ {\left\lvert{\ddot{\mathbf{m}}}\right\rvert}^2 }\right\rangle= \frac{1}{{2}} (I_o \pi b^2 \omega^2)^2\end{aligned} \hspace{\stretch{1}}(2.29)

where

\begin{aligned}I_o = \dot{q} = \omega q\end{aligned} \hspace{\stretch{1}}(2.30)

So the power of the magnetic dipole is

\begin{aligned}P_m(R) = \frac{b^4 q^2 \pi^2 \omega^6}{3 c^5}\end{aligned} \hspace{\stretch{1}}(2.31)

Taking ratios of the magnetic and electric power we find

\begin{aligned}\frac{P_m}{P_e} &= \frac{b^4 q^2 \pi^2 \omega^6}{b^2 q^2 \omega^4 c^2}  \\ &\sim \frac{b^2 \omega^2}{c^2} \\ &= \left(\frac{b \omega}{c}\right)^2 \\ &= \left(\frac{b }{\lambda}\right)^2 \end{aligned}

This difference in power shows the higher order moment dependence: in the \lambda \gg b approximation the magnetic dipole radiation is suppressed by a factor (b/\lambda)^2 relative to the electric dipole radiation.

FIXME: go back and review the “third year” content and see where the magnetic dipole moment came from. That’s the key to this argument, since we need to see how this ends up equivalent to a pair of charges in the electric field case.

Midterm solution discussion.

In the last part of the tutorial, the bonus question from the tutorial was covered. This was to determine the Yukawa potential from the differential equation that we found in the earlier part of the problem.

I took a couple of notes about this on paper, but don’t intend to write them up. Everything proceeded exactly as I would have expected for solving the problem (I barely finished the midterm as is, so I didn’t have a chance to try this bonus question). Take Fourier transforms and then evaluate the inverse Fourier integral. This is exactly what we can do for the Coulomb potential, but actually easier since we don’t have to introduce anything to offset the poles (and we recover the Coulomb potential in the M \rightarrow 0 case).


PHY450H1S. Relativistic Electrodynamics Lecture 20 (Taught by Prof. Erich Poppitz). Potentials at a distance from a localized current distribution. EM fields due to dipole radiation.

Posted by peeterjoot on March 23, 2011


Reading.

Covering chapter 8 material from the text [1].

Covering lecture notes pp. 147-165: EM fields of a moving source (147-148+HW5); a particle at rest (148); a constant velocity particle (149-152); behavior of EM fields “at infinity” for a general-worldline source and radiation (152-153) [Tuesday, Mar. 15]; radiated power (154); fields in the “wave zone” and discussions of approximations made (155-159); EM fields due to electric dipole radiation (160-163); Poynting vector, angular distribution, and power of dipole radiation (164-165) [Wednesday, Mar. 16…]

Multipole expansion of the fields.

\begin{aligned}A^i(\mathbf{x}, t) = \frac{1}{{c}} \int d^3 \mathbf{x}' j^i\left(\mathbf{x}', t - \frac{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert} }{c}\right) \frac{1}{{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert} }}\end{aligned} \hspace{\stretch{1}}(2.1)

This integral is over the region of space where the sources j^i are non-vanishing. That region is bounded, with {\left\lvert{\mathbf{x}'}\right\rvert} \le l, so for {\left\lvert{\mathbf{x}}\right\rvert} \gg l we can expand the denominator in a multipole expansion

\begin{aligned}\frac{1}{{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert} }}&=\frac{1}{{\sqrt{(\mathbf{x} - \mathbf{x}')^2} }} \\ &=\frac{1}{{\sqrt{\mathbf{x}^2 + {\mathbf{x}'}^2 - 2 \mathbf{x} \cdot \mathbf{x}'} }} \\ &=\frac{1}{{{\left\lvert{\mathbf{x}}\right\rvert} }} \frac{1}{{\sqrt{1 + \frac{{\mathbf{x}'}^2}{\mathbf{x}^2} - 2 \frac{\hat{\mathbf{x}}}{{\left\lvert{\mathbf{x}}\right\rvert} } \cdot \mathbf{x}'} }} \\ &\approx\frac{1}{{{\left\lvert{\mathbf{x}}\right\rvert} }} \frac{1}{{\sqrt{1 - 2 \frac{\hat{\mathbf{x}}}{{\left\lvert{\mathbf{x}}\right\rvert} } \cdot \mathbf{x}'} }} \\ &\approx\frac{1}{{{\left\lvert{\mathbf{x}}\right\rvert} }} \left(1 + \frac{\hat{\mathbf{x}}}{{\left\lvert{\mathbf{x}}\right\rvert} } \cdot \mathbf{x}' \right).\end{aligned}

Neglecting all but the first order term in the expansion we have

\begin{aligned}\frac{1}{{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert} }}\approx \frac{1}{{{\left\lvert{\mathbf{x}}\right\rvert} }} + \frac{\mathbf{x}}{{\left\lvert{\mathbf{x}}\right\rvert}^3} \cdot \mathbf{x}' .\end{aligned} \hspace{\stretch{1}}(2.2)
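The expansion 2.2 can also be checked mechanically (my own sympy aside, not from the lecture), writing {\left\lvert{\mathbf{x}}\right\rvert} = X, {\left\lvert{\mathbf{x}'}\right\rvert} = X', and \mathbf{x} \cdot \mathbf{x}' = X X' u with u the cosine of the angle between them:

# Sketch: series expansion of 1/|x - x'| in the small parameter |x'|/|x|.
import sympy as sp

X, Xp = sp.symbols('X Xp', positive=True)
u = sp.Symbol('u', real=True)

expr = 1 / sp.sqrt(X**2 + Xp**2 - 2*X*Xp*u)
print(sp.series(expr, Xp, 0, 2).removeO())
# -> 1/X + u*Xp/X**2, i.e. 1/|x| + (x . x')/|x|^3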

Similarly, for the retarded time we have

\begin{aligned}t - \frac{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert} }{c} &\approx t - \frac{{\left\lvert{\mathbf{x}}\right\rvert} }{c} \left( 1 - \frac{\mathbf{x} \cdot \mathbf{x}'}{{\left\lvert{\mathbf{x}}\right\rvert}^2} \right) \\ &= t - \frac{{\left\lvert{\mathbf{x}}\right\rvert} }{c} + \frac{\mathbf{x} \cdot \mathbf{x}'}{c {\left\lvert{\mathbf{x}}\right\rvert} }\end{aligned}

We can now do a first order Taylor expansion of the current j^i about the retarded time

\begin{aligned}j^i\left(\mathbf{x}', t - \frac{{\left\lvert{\mathbf{x}}\right\rvert} }{c} + \frac{\mathbf{x} \cdot \mathbf{x}'}{c {\left\lvert{\mathbf{x}}\right\rvert} } + \cdots \right)\approx j^i\left(\mathbf{x}', t - \frac{{\left\lvert{\mathbf{x}}\right\rvert} }{c}\right) + \frac{\partial {j^i}}{\partial {t}} \left(\mathbf{x}', t - \frac{{\left\lvert{\mathbf{x}}\right\rvert} }{c}\right) \frac{\mathbf{x} \cdot \mathbf{x}'}{c {\left\lvert{\mathbf{x}}\right\rvert} }.\end{aligned} \hspace{\stretch{1}}(2.3)

To elucidate the physics, imagine that time dependence of the source is periodic with angular frequency \omega_0. For example:

\begin{aligned}j^i = A(\mathbf{x}) e^{-i \omega_0 t}.\end{aligned} \hspace{\stretch{1}}(2.4)

Here we have

\begin{aligned}\frac{\partial {j^i}}{\partial {t}} = -i \omega_0 j^i.\end{aligned} \hspace{\stretch{1}}(2.5)

So, for the magnitude of the second term we have

\begin{aligned}{\left\lvert{\frac{\partial {j^i}}{\partial {t}} \frac{\mathbf{x} \cdot \mathbf{x}'}{c \left\lvert \mathbf{x} \right\rvert }}\right\rvert} = \omega_0 {\left\lvert{j^i \frac{\mathbf{x} \cdot \mathbf{x}'}{c \left\lvert \mathbf{x} \right\rvert }}\right\rvert}.\end{aligned} \hspace{\stretch{1}}(2.6)

Requiring second term much less than the first term means

\begin{aligned}{\left\lvert{\omega_0 \frac{\mathbf{x} \cdot \mathbf{x}'}{c \left\lvert \mathbf{x} \right\rvert }}\right\rvert} \ll 1.\end{aligned} \hspace{\stretch{1}}(2.7)

But recall

\begin{aligned}{\left\lvert{\frac{\mathbf{x} \cdot \mathbf{x}'}{c \left\lvert \mathbf{x} \right\rvert }}\right\rvert} \le \frac{l}{c},\end{aligned} \hspace{\stretch{1}}(2.8)

so for our Taylor expansion to be valid we have the following constraints on the angular velocity and the position vectors for our charge and measurement position

\begin{aligned}{\left\lvert{\omega_0 \frac{\mathbf{x} \cdot \mathbf{x}'}{c \left\lvert \mathbf{x} \right\rvert }}\right\rvert} \le \frac{\omega_0 l}{c} \ll 1.\end{aligned} \hspace{\stretch{1}}(2.9)

This is a physical requirement on the size of the emitter relative to the wavelength of the radiation (if the wavelength doesn’t meet this requirement, this expansion does not work). The connection to the wavelength can be observed by noting that we have

\begin{aligned}\frac{\omega_0}{c} &= k \\ k &= \frac{2 \pi}{\lambda} \\ \implies \frac{\omega_0}{c} &\sim \frac{1}{{\lambda}} \\ \end{aligned}

Putting the pieces together. Potentials at a distance.

Moral: we’ll utilize two expansions (we need two small parameters)

\begin{enumerate}
\item {\left\lvert{\mathbf{x}}\right\rvert} \gg l
\item \lambda \gg l
\end{enumerate}

Plugging these approximations into the potential 2.1 we have

\begin{aligned}A^i(\mathbf{x}, t) \approx \frac{1}{{c}} \int d^3 \mathbf{x}' \left( j^i\left(\mathbf{x}', t - \frac{{\left\lvert{\mathbf{x}}\right\rvert} }{c}\right) + \frac{\partial {j^i}}{\partial {t}} \left(\mathbf{x}', t - \frac{{\left\lvert{\mathbf{x}}\right\rvert} }{c}\right) \frac{\mathbf{x} \cdot \mathbf{x}'}{c {\left\lvert{\mathbf{x}}\right\rvert} } \right)\left( \frac{1}{{{\left\lvert{\mathbf{x}}\right\rvert} }} + \frac{\mathbf{x}}{{\left\lvert{\mathbf{x}}\right\rvert}^3} \cdot \mathbf{x}' \right)\end{aligned} \hspace{\stretch{1}}(3.10)

\begin{aligned}A^0(\mathbf{x}, t) \approx \frac{1}{{{\left\lvert{\mathbf{x}}\right\rvert} }} \int d^3 \mathbf{x}' \rho\left(\mathbf{x}', t - \frac{{\left\lvert{\mathbf{x}}\right\rvert} }{c}\right)+\frac{\mathbf{x}}{{\left\lvert{\mathbf{x}}\right\rvert}^3} \cdot \int d^3 \mathbf{x}' \mathbf{x}' \rho \left(\mathbf{x}', t - \frac{{\left\lvert{\mathbf{x}}\right\rvert} }{c}\right)+\frac{\mathbf{x}}{c {\left\lvert{\mathbf{x}}\right\rvert}^2} \cdot \int d^3 \mathbf{x}' \mathbf{x}' \frac{\partial {\rho}}{\partial {t}}\left(\mathbf{x}', t - \frac{{\left\lvert{\mathbf{x}}\right\rvert} }{c}\right)\end{aligned} \hspace{\stretch{1}}(3.11)

The first term is the total charge evaluated at the retarded time. In the second term (and in the third, where its derivative is taken) we have

\begin{aligned}\int d^3 \mathbf{x}' \mathbf{x}' \rho\left(\mathbf{x}', t - \frac{{\left\lvert{\mathbf{x}}\right\rvert} }{c}\right) = \mathbf{d}(t_r),\end{aligned} \hspace{\stretch{1}}(3.12)

which is the dipole moment evaluated at the retarded time t_r = t - {\left\lvert{\mathbf{x}}\right\rvert}/c. In the last term we can pull out the time derivative (because we are integrating over \mathbf{x}')

\begin{aligned}\frac{1}{{{\left\lvert{\mathbf{x}}\right\rvert}^2}} \mathbf{x} \cdot \int d^3 \mathbf{x}' \mathbf{x}' \frac{\partial {}}{\partial {t}} \rho\left(\mathbf{x}', t - \frac{{\left\lvert{\mathbf{x}}\right\rvert} }{c}\right)&=\frac{1}{{{\left\lvert{\mathbf{x}}\right\rvert}^2}} \mathbf{x} \cdot \frac{\partial {}}{\partial {t}} \int d^3 \mathbf{x}' \mathbf{x}' \rho\left(\mathbf{x}', t - \frac{{\left\lvert{\mathbf{x}}\right\rvert} }{c}\right) \\ &=\frac{1}{{{\left\lvert{\mathbf{x}}\right\rvert}^2}} \mathbf{x} \cdot \frac{\partial {}}{\partial {t}}\mathbf{d} \left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert} }{c}\right)\end{aligned}

For the spatial components of the current let’s just keep the first term

\begin{aligned}A^\alpha(\mathbf{x}, t) &\approx\frac{1}{{ c {\left\lvert{\mathbf{x}}\right\rvert} }} \int d^3 \mathbf{x}' j^\alpha\left(\mathbf{x}', t - \frac{{\left\lvert{\mathbf{x}}\right\rvert} }{c}\right) \\ &=\frac{1}{{ c {\left\lvert{\mathbf{x}}\right\rvert} }} \int d^3 \mathbf{x}' (\boldsymbol{\nabla}_{\mathbf{x}'} x^\alpha) \cdot \mathbf{j}\left(\mathbf{x}', t - \frac{{\left\lvert{\mathbf{x}}\right\rvert} }{c}\right)  \\ &=\frac{1}{{ c {\left\lvert{\mathbf{x}}\right\rvert} }} \int d^3 \mathbf{x}' \left(\boldsymbol{\nabla} \cdot \left( {x'}^\alpha \mathbf{j} \left(\mathbf{x}', t - \frac{{\left\lvert{\mathbf{x}}\right\rvert} }{c}\right) \right)- {x'}^\alpha \boldsymbol{\nabla}_{\mathbf{x}'} \cdot \mathbf{j}\left(\mathbf{x}', t - \frac{{\left\lvert{\mathbf{x}}\right\rvert} }{c}\right) \right) \\ &=\frac{1}{{ c {\left\lvert{\mathbf{x}}\right\rvert} }} \oint_{S^2_\infty} d^2 \boldsymbol{\sigma} \cdot {x'}^\alpha \mathbf{j}\left(\mathbf{x}', t - \frac{{\left\lvert{\mathbf{x}}\right\rvert} }{c}\right)+\frac{1}{{ c {\left\lvert{\mathbf{x}}\right\rvert} }} \int d^3 \mathbf{x}' {x'}^\alpha \frac{\partial {}}{\partial {t}}\rho\left(\mathbf{x}', t - \frac{{\left\lvert{\mathbf{x}}\right\rvert} }{c}\right)\end{aligned}

There are two tricks used here. One was writing the unit vector as \mathbf{e}_\alpha = \boldsymbol{\nabla} x^\alpha. The other was use of the continuity equation {\partial {\rho}}/{\partial {t}} + \boldsymbol{\nabla} \cdot \mathbf{j} = 0. The first trick was mentioned as one of the few tricks of physics that will often be repeated, since there aren’t many good ones.

With the first term vanishing on the boundary (since j^i is localized), and pulling the time derivatives out of the integral, we can summarize the dipole potentials as

\begin{aligned}\boxed{\begin{aligned}A^0(\mathbf{x}, t) &= \frac{Q\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert} }{c}\right)}{{\left\lvert{\mathbf{x}}\right\rvert} } + \frac{\mathbf{x} \cdot \mathbf{d}\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert} }{c}\right)}{{\left\lvert{\mathbf{x}}\right\rvert}^3} + \frac{\mathbf{x} \cdot \dot{\mathbf{d}}\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert} }{c}\right)}{c {\left\lvert{\mathbf{x}}\right\rvert}^2} \\ \mathbf{A}(\mathbf{x}, t) &= \frac{1}{{c {\left\lvert{\mathbf{x}}\right\rvert} }} \dot{\mathbf{d}}\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert} }{c}\right).\end{aligned}}\end{aligned} \hspace{\stretch{1}}(3.13)

Example: Electric dipole radiation

PICTURE: two closely separated opposite charges, wiggling along the line connecting them (on the z-axis): -q at rest, while +q oscillates.

\begin{aligned}z_+(t) = z_0 + a \sin\omega t.\end{aligned} \hspace{\stretch{1}}(4.14)

Since we’ve put the -q charge at the origin, it has no contribution to the dipole moment, and we have

\begin{aligned}\mathbf{d}(t) = \mathbf{e}_z q (z_0 + a \sin\omega t).\end{aligned} \hspace{\stretch{1}}(4.15)

Thus

\begin{aligned}A^0(\mathbf{x}, t) &= \frac{1}{{{\left\lvert{\mathbf{x}}\right\rvert}^3}} \mathbf{x} \cdot \mathbf{d}\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert} }{c}\right) + \frac{1}{{c {\left\lvert{\mathbf{x}}\right\rvert}^2}} \mathbf{x} \cdot \dot{\mathbf{d}}\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert} }{c}\right) \\ \mathbf{A}(\mathbf{x}, t) &= \frac{\dot{\mathbf{d}}\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert} }{c}\right)}{c {\left\lvert{\mathbf{x}}\right\rvert} }\end{aligned} \hspace{\stretch{1}}(4.16)

so with t_r = t - {\left\lvert{\mathbf{x}}\right\rvert}/c, and z = \mathbf{x} \cdot \mathbf{e}_z in the dipole dot product, we have

\begin{aligned}A^0(\mathbf{x}, t) &= \frac{z q}{{\left\lvert{\mathbf{x}}\right\rvert}^3} ( z_0 + a \sin(\omega t_r) ) + \frac{z q}{c {\left\lvert{\mathbf{x}}\right\rvert}^2} a \omega \cos(\omega t_r) \\ \mathbf{A}(\mathbf{x}, t) &= \frac{1}{{c {\left\lvert{\mathbf{x}}\right\rvert} }} \mathbf{e}_z q a \omega \cos(\omega t_r)\end{aligned} \hspace{\stretch{1}}(4.18)

These hold provided {\left\lvert{\mathbf{x}}\right\rvert} \gg (z_0, a) and \omega l/c \ll 1. Recall that \omega \lambda = 2 \pi c, and note that \omega l has dimensions of velocity.

FIXME: think through and justify \omega l = v.

Observe that \omega l \sim v, so this is a requirement that our positively charged particle is moving with {\left\lvert{\mathbf{v}}\right\rvert}/c \ll 1.

Now we’ll take derivatives. The first term of the scalar potential will be ignored since the 1/{\left\lvert{\mathbf{x}}\right\rvert}^2 is non-radiative.

\begin{aligned}\mathbf{E} &= -\boldsymbol{\nabla} A^0 - \frac{1}{{c}} \frac{\partial {\mathbf{A}}}{\partial {t}} \\ &= -\frac{z a \omega q}{{\left\lvert{\mathbf{x}}\right\rvert}^2 c} (-\omega \sin(\omega t_r)) \left( - \frac{1}{{c}} \boldsymbol{\nabla} {\left\lvert{\mathbf{x}}\right\rvert} \right)- \frac{1}{{c^2 {\left\lvert{\mathbf{x}}\right\rvert} }} \mathbf{e}_z q a \omega^2 (-\sin(\omega t_r)).\end{aligned}

We’ve used \boldsymbol{\nabla} t_r = -\boldsymbol{\nabla} {\left\lvert{\mathbf{x}}\right\rvert}/c, and \boldsymbol{\nabla} {\left\lvert{\mathbf{x}}\right\rvert} = \hat{\mathbf{x}}, and \partial_t t_r = 1.

\begin{aligned}\mathbf{E} = \frac{ q a \omega^2 }{c^2 {\left\lvert{\mathbf{x}}\right\rvert} } \sin(\omega t_r) \left( \mathbf{e}_z - \frac{z}{{\left\lvert{\mathbf{x}}\right\rvert} } \hat{\mathbf{x}} \right)\end{aligned} \hspace{\stretch{1}}(4.20)

So,

\begin{aligned}{\left\lvert{\mathbf{S}}\right\rvert} \sim \omega^4\end{aligned} \hspace{\stretch{1}}(4.21)

The power is proportional to \omega^4. Higher frequency radiation has more power: this is why the sky is blue! It all comes from the fact that the radiated electric field is proportional to the acceleration (\sim \omega^2), so the power goes as its square.

References

[1] L.D. Landau and E.M. Lifshitz. The classical theory of fields. Butterworth-Heinemann, 1980.


PHY450H1S, Relativistic Electrodynamics, Problem Set 4.

Posted by peeterjoot on March 17, 2011


Problem 1. Energy, momentum, etc., of EM waves.

Statement

\begin{enumerate}
\item Calculate the energy density, energy flux, and momentum density of a plane monochromatic linearly polarized electromagnetic wave.
\item Calculate the values of these quantities averaged over a period.
\item Imagine that a plane monochromatic linearly polarized wave incident on a surface (let the angle between the wave vector and the normal to the surface be \theta) is completely reflected. Find the pressure that the EM wave exerts on the surface.
\item To plug in some numbers, note that the intensity of sunlight hitting the Earth is about 1300 W/m^2 ( the intensity is the average power per unit area transported by the wave). If sunlight strikes a perfect absorber, what is the pressure exerted? What if it strikes a perfect reflector? What fraction of the atmospheric pressure does this amount to?
\end{enumerate}

Solution

Part 1. Energy and momentum density.

Because it doesn’t add too much complexity, I’m going to calculate these using the more general elliptically polarized wave solutions. Our vector potential (in the Coulomb gauge \phi = 0, \boldsymbol{\nabla} \cdot \mathbf{A} = 0) has the form

\begin{aligned}\mathbf{A} = \text{Real} \boldsymbol{\beta} e^{i (\omega t - \mathbf{k} \cdot \mathbf{x}) }.\end{aligned} \hspace{\stretch{1}}(1.1)

The elliptical polarization case only differs from the linear by allowing \boldsymbol{\beta} to be complex, rather than purely real or purely imaginary. Observe that the Coulomb gauge condition \boldsymbol{\nabla} \cdot \mathbf{A} = 0 implies

\begin{aligned}\boldsymbol{\beta} \cdot \mathbf{k} = 0,\end{aligned} \hspace{\stretch{1}}(1.2)

a fact that will kill off terms in a number of places in the following manipulations.

Also observe that for this to be a solution of the wave equation, whose operator is

\begin{aligned}\frac{1}{{c^2}} \frac{\partial^2 {{}}}{\partial {{t}}^2} - \Delta,\end{aligned} \hspace{\stretch{1}}(1.3)

the frequency and wave vector must be related by the condition

\begin{aligned}\frac{\omega}{c} = {\left\lvert{\mathbf{k}}\right\rvert} = k.\end{aligned} \hspace{\stretch{1}}(1.4)

For the time and spatial phase let’s write

\begin{aligned}\theta = \omega t - \mathbf{k} \cdot \mathbf{x}.\end{aligned} \hspace{\stretch{1}}(1.5)

In the Coulomb gauge, our electric and magnetic fields are

\begin{aligned}\mathbf{E} &= -\frac{1}{{c}}\frac{\partial {\mathbf{A}}}{\partial {t}} = \text{Real} \frac{-i\omega}{c} \boldsymbol{\beta} e^{i\theta} \\ \mathbf{B} &= \boldsymbol{\nabla} \times \mathbf{A} = \text{Real} i \boldsymbol{\beta} \times \mathbf{k} e^{i\theta}\end{aligned} \hspace{\stretch{1}}(1.6)

Similar to \S 48 of the text [1], let’s split \boldsymbol{\beta} into a phase and perpendicular vector components so that

\begin{aligned}\boldsymbol{\beta} = \mathbf{b} e^{-i\alpha}\end{aligned} \hspace{\stretch{1}}(1.8)

where \mathbf{b} has a real square

\begin{aligned}\mathbf{b}^2 = {\left\lvert{\boldsymbol{\beta}}\right\rvert}^2.\end{aligned} \hspace{\stretch{1}}(1.9)

This allows a split into two perpendicular real vectors

\begin{aligned}\mathbf{b} = \mathbf{b}_1 + i \mathbf{b}_2,\end{aligned} \hspace{\stretch{1}}(1.10)

where \mathbf{b}_1 \cdot \mathbf{b}_2 = 0 since \mathbf{b}^2 = \mathbf{b}_1^2 - \mathbf{b}_2^2 + 2 i \mathbf{b}_1 \cdot \mathbf{b}_2 is real.

Our electric and magnetic fields are now reduced to

\begin{aligned}\mathbf{E} &= \text{Real} \left( \frac{-i\omega}{c} \mathbf{b} e^{i(\theta - \alpha)} \right) \\ \mathbf{B} &= \text{Real} \left( i \mathbf{b} \times \mathbf{k} e^{i(\theta - \alpha)} \right) \end{aligned} \hspace{\stretch{1}}(1.11)

or explicitly in terms of \mathbf{b}_1 and \mathbf{b}_2

\begin{aligned}\mathbf{E} &= \frac{\omega}{c} ( \mathbf{b}_1 \sin(\theta-\alpha) + \mathbf{b}_2 \cos(\theta-\alpha)) \\ \mathbf{B} &= ( \mathbf{k} \times \mathbf{b}_1 ) \sin(\theta-\alpha) + (\mathbf{k} \times \mathbf{b}_2) \cos(\theta-\alpha) \end{aligned} \hspace{\stretch{1}}(1.13)

The special case of interest for this problem, since it only strictly asked for linear polarization, is where \alpha = 0 and one of \mathbf{b}_1 or \mathbf{b}_2 is zero (i.e. \boldsymbol{\beta} is strictly real or strictly imaginary). The case with \boldsymbol{\beta} strictly real, as done in class, is

\begin{aligned}\mathbf{E} &= \frac{\omega}{c} \mathbf{b}_1 \sin(\theta-\alpha) \\ \mathbf{B} &= ( \mathbf{k} \times \mathbf{b}_1 ) \sin(\theta-\alpha) \end{aligned} \hspace{\stretch{1}}(1.15)

Now lets calculate the energy density and Poynting vectors. We’ll need a few intermediate results.

\begin{aligned}(\text{Real} \mathbf{d} e^{i\phi})^2 &= \frac{1}{{4}} ( \mathbf{d} e^{i\phi} + \mathbf{d}^{*} e^{-i\phi})^2 \\ &= \frac{1}{{4}} ( \mathbf{d}^2 e^{2 i \phi} + (\mathbf{d}^{*})^2 e^{-2 i \phi} + 2 {\left\lvert{\mathbf{d}}\right\rvert}^2 ) \\ &= \frac{1}{{2}} \left( {\left\lvert{\mathbf{d}}\right\rvert}^2 + \text{Real} ( \mathbf{d} e^{i \phi} )^2 \right),\end{aligned}

and

\begin{aligned}(\text{Real} \mathbf{d} e^{i\phi}) \times (\text{Real} \mathbf{e} e^{i\phi}) &= \frac{1}{{4}} ( \mathbf{d} e^{i\phi} + \mathbf{d}^{*} e^{-i\phi}) \times ( \mathbf{e} e^{i\phi} + \mathbf{e}^{*} e^{-i\phi}) \\ &= \frac{1}{{2}} \text{Real} \left( \mathbf{d} \times \mathbf{e}^{*} + (\mathbf{d} \times \mathbf{e}) e^{2 i \phi} \right).\end{aligned}
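Since these phasor identities do a lot of work in what follows, here is a quick numerical spot check of both (my own aside, assuming numpy), with random complex vectors:

# Sketch: check the squared Real-part and cross product phasor identities above.
import numpy as np

rng = np.random.default_rng(1)
d = rng.normal(size=3) + 1j * rng.normal(size=3)
e = rng.normal(size=3) + 1j * rng.normal(size=3)
phi = 0.7

Re_d = np.real(d * np.exp(1j * phi))
Re_e = np.real(e * np.exp(1j * phi))

lhs1 = Re_d.dot(Re_d)
rhs1 = 0.5 * (np.vdot(d, d).real + np.real(np.dot(d, d) * np.exp(2j * phi)))
print(np.isclose(lhs1, rhs1))   # True

lhs2 = np.cross(Re_d, Re_e)
rhs2 = 0.5 * np.real(np.cross(d, np.conj(e)) + np.cross(d, e) * np.exp(2j * phi))
print(np.allclose(lhs2, rhs2))  # True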

Let’s use arrowed vectors for the phasor parts

\begin{aligned}\vec{E} &= \frac{-i\omega}{c} \mathbf{b} e^{i(\theta - \alpha)} \\ \vec{B} &= i \mathbf{b} \times \mathbf{k} e^{i(\theta - \alpha)},\end{aligned} \hspace{\stretch{1}}(1.17)

where we can recover our vector quantities by taking real parts \mathbf{E} = \text{Real} \vec{E}, \mathbf{B} = \text{Real} \vec{B}. Our energy density in terms of these phasors is then

\begin{aligned}\mathcal{E} = \frac{1}{{8\pi}} (\mathbf{E}^2 + \mathbf{B}^2)= \frac{1}{{16\pi}} \left( {\left\lvert{\vec{E}}\right\rvert}^2 + {\left\lvert{\vec{B}}\right\rvert}^2 + \text{Real} ({\vec{E}}^2 + {\vec{B}}^2) \right).\end{aligned} \hspace{\stretch{1}}(1.19)

This is

\begin{aligned}\mathcal{E} &=\frac{1}{{16\pi}}\left(\frac{\omega^2}{c^2} {\left\lvert{\mathbf{b}}\right\rvert}^2 + {\left\lvert{\mathbf{b} \times \mathbf{k}}\right\rvert}^2-\text{Real} \left(\frac{\omega^2}{c^2} \mathbf{b}^2 + (\mathbf{b} \times \mathbf{k})^2\right)e^{2 i(\theta - \alpha)} \right).\end{aligned}

Note that \omega^2/c^2 = \mathbf{k}^2, and {\left\lvert{\mathbf{b} \times \mathbf{k}}\right\rvert}^2 = {\left\lvert{\mathbf{b}}\right\rvert}^2 \mathbf{k}^2 (since \mathbf{b} \cdot \mathbf{k} = 0). Also (\mathbf{b} \times \mathbf{k})^2 = \mathbf{b}^2 \mathbf{k}^2, so we have

\begin{aligned}\boxed{\mathcal{E} =\frac{ \mathbf{k}^2 }{8\pi}\left({\left\lvert{\mathbf{b}}\right\rvert}^2 -\text{Real} \mathbf{b}^2 e^{2 i(\theta - \alpha)} \right).}\end{aligned} \hspace{\stretch{1}}(1.20)

Now, for the Poynting vector. We have

\begin{aligned}S = \frac{c}{4 \pi} \mathbf{E} \times \mathbf{B} = \frac{c}{8 \pi} \text{Real} \left( \vec{E} \times \vec{B}^{*} + \vec{E} \times \vec{B} \right).\end{aligned} \hspace{\stretch{1}}(1.21)

This is

\begin{aligned}S &= \frac{c}{8 \pi} \text{Real} \left( -k \mathbf{b} \times (\mathbf{b}^{*} \times \mathbf{k}) + k \mathbf{b} \times (\mathbf{b} \times \mathbf{k} ) e^{2 i(\theta - \alpha)} \right) \\ \end{aligned}

Reducing the terms we get \mathbf{b} \times (\mathbf{b}^{*} \times \mathbf{k}) = -\mathbf{k} {\left\lvert{\mathbf{b}}\right\rvert}^2, and \mathbf{b} \times (\mathbf{b} \times \mathbf{k}) = -\mathbf{k} \mathbf{b}^2, leaving

\begin{aligned}\boxed{S = \frac{c \hat{\mathbf{k}} \mathbf{k}^2 }{8 \pi} \left( {\left\lvert{\mathbf{b}}\right\rvert}^2 - \text{Real} \mathbf{b}^2 e^{2 i(\theta - \alpha)} \right) = c \hat{\mathbf{k}} \mathcal{E}}\end{aligned} \hspace{\stretch{1}}(1.22)

Now, the text in \S 47 defines the energy flux as the Poynting vector, and the momentum density as \mathbf{S}/c^2, so we just divide 1.22 by c^2 for the momentum density and we are done. For the linearly polarized case (all that was actually asked for, but less cool to calculate), where \mathbf{b} is real, we have

\begin{aligned}\mbox{Energy density} &= \mathcal{E} = \frac{ \mathbf{k}^2 \mathbf{b}^2 }{8\pi} ( 1 - \cos( 2 (\omega t - \mathbf{k} \cdot \mathbf{x})) ) \\ \mbox{Energy flux} &= \mathbf{S} = c \hat{\mathbf{k}} \mathcal{E} \\ \mbox{Momentum density} &= \frac{1}{{c^2}} \mathbf{S} = \frac{\hat{\mathbf{k}}}{c} \mathcal{E}.\end{aligned} \hspace{\stretch{1}}(1.23)

Part 2. Averaged.

We want to average over one period, the time T such that \omega T = 2 \pi, so the average is

\begin{aligned}\left\langle{{f}}\right\rangle = \frac{\omega}{2\pi} \int_0^{2\pi/\omega} f dt.\end{aligned} \hspace{\stretch{1}}(1.26)

It is clear that this will just kill off the sinusoidal terms, leaving

\begin{aligned}\mbox{Average Energy density} &= \left\langle{{\mathcal{E}}}\right\rangle = \frac{ \mathbf{k}^2 {\left\lvert{\mathbf{b}}\right\rvert}^2 }{8\pi} \\ \mbox{Average Energy flux} &= \left\langle{\mathbf{S}}\right\rangle = c \hat{\mathbf{k}} \left\langle{{\mathcal{E}}}\right\rangle \\ \mbox{Average Momentum density} &= \frac{1}{{c^2}} \left\langle{\mathbf{S}}\right\rangle = \frac{\hat{\mathbf{k}}}{c} \left\langle{{\mathcal{E}}}\right\rangle.\end{aligned} \hspace{\stretch{1}}(1.27)

Part 3. Pressure.

The magnitude of the momentum of light is related to its energy by

\begin{aligned}\mathbf{p} = \frac{\mathcal{E}}{c}\end{aligned} \hspace{\stretch{1}}(1.30)

and can thus loosely identify the magnitude of the force as

\begin{aligned}\frac{d{\mathbf{p}}}{dt} &= \frac{1}{{c}} \frac{\partial {}}{\partial {t}} \int \frac{\mathbf{E}^2 + \mathbf{B}^2}{8 \pi} d^3 \mathbf{x} \\ &= \int d^2 \boldsymbol{\sigma} \cdot \frac{\mathbf{S}}{c}.\end{aligned}

With pressure as the force per area, we could identify

\begin{aligned}\frac{\mathbf{S}}{c}\end{aligned} \hspace{\stretch{1}}(1.31)

as the instantaneous (directed) pressure on a surface. What is that for linearly polarized light? We have from above for the linear polarized case (where {\left\lvert{\mathbf{b}}\right\rvert}^2 = \mathbf{b}^2)

\begin{aligned}\mathbf{S} = \frac{c \hat{\mathbf{k}} \mathbf{k}^2 \mathbf{b}^2 }{8 \pi} ( 1 - \cos( 2 (\omega t - \mathbf{k} \cdot \mathbf{x}) ) )\end{aligned} \hspace{\stretch{1}}(1.32)

If we look at the magnitude of the average pressure from the radiation, we have

\begin{aligned}{\left\lvert{\frac{\left\langle{\mathbf{S}}\right\rangle}{c}}\right\rvert} = \frac{\mathbf{k}^2 \mathbf{b}^2 }{8 \pi}.\end{aligned} \hspace{\stretch{1}}(1.33)

Part 4. Sunlight.

With atmospheric pressure at 101.3 kPa, and the pressure from the light at (1300 W/m^2)/(3 x 10^8 m/s), we have roughly 4 x 10^{-6} Pa of pressure from the sunlight, only \sim 4 x 10^{-11} of the total atmospheric pressure. Wow. Very tiny!

Would it make any difference if the surface is a perfect absorber or a reflector? Consider a ball hitting a wall. If it manages to embed itself in the wall, the wall will have to move a bit to conserve momentum. However, if the ball bounces off, twice the momentum has been transferred to the wall. The numbers above would be for perfect absorption, so double them for a perfect reflector.
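For reference, here is the arithmetic behind these estimates (rough numbers only, my own addition):

```python
# rough numbers for the sunlight pressure estimate above
solar_intensity = 1.3e3        # W/m^2, incident solar intensity (approximate)
c = 3.0e8                      # m/s
atmospheric = 101.3e3          # Pa

p_absorb = solar_intensity / c             # perfect absorber
p_reflect = 2 * p_absorb                   # perfect reflector: double the momentum transfer
print(p_absorb, p_reflect, p_absorb / atmospheric)
# ~4.3e-06 Pa, ~8.7e-06 Pa, and ~4e-11 of atmospheric pressure
```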

Problem 2. Spherical EM waves.

Statement

Suppose you are given:

\begin{aligned}\vec{E}(r, \theta, \phi, t) = A \frac{\sin\theta}{r} \left( \cos(k r - \omega t) - \frac{1}{{k r}} \sin(k r - \omega t) \right) \hat{\boldsymbol{\phi}}\end{aligned} \hspace{\stretch{1}}(2.34)

where \omega = c k and \hat{\boldsymbol{\phi}} is the unit vector in the \phi-direction. This is a simple example of a spherical wave.

\begin{enumerate}
\item Show that \vec{E} obeys all four Maxwell equations in vacuum and find the associated magnetic field.
\item Calculate the Poynting vector. Average \vec{S} over a full cycle to get the intensity vector \vec{I} \equiv \left\langle{{\vec{S}}}\right\rangle. Where does it point to? How does it depend on r?
\item Integrate the intensity vector flux through a spherical surface centered at the origin to find the total power radiated.
\end{enumerate}

Solution

Part 1. Maxwell equation verification and magnetic field.

Our vacuum Maxwell equations to verify are

\begin{aligned}\nabla \cdot \vec{E} &= 0 \\ \nabla \times \vec{B} -\frac{1}{{c}} \frac{\partial {\vec{E}}}{\partial {t}} &= 0 \\ \nabla \cdot \vec{B} &= 0 \\ \nabla \times \vec{E} +\frac{1}{{c}} \frac{\partial {\vec{B}}}{\partial {t}} &= 0.\end{aligned} \hspace{\stretch{1}}(2.35)

We’ll also need the spherical polar forms of the divergence and curl operators, as found in \S 1.4 of [2]

\begin{aligned}\nabla \cdot \vec{v} &=\frac{1}{{r^2}} \partial_r ( r^2 v_r )+ \frac{1}{{r\sin\theta}} \partial_\theta (\sin\theta v_\theta)+ \frac{1}{{r\sin\theta}} \partial_\phi v_\phi \\ \nabla \times \vec{v} &=\frac{1}{{r \sin\theta}} \left(\partial_\theta (\sin\theta v_\phi) - \partial_\phi v_\theta\right) \hat{\mathbf{r}}+\frac{1}{{r }} \left(\frac{1}{{\sin\theta}} \partial_\phi v_r - \partial_r (r v_\phi)\right) \hat{\boldsymbol{\theta}}+\frac{1}{{r }} \left(\partial_r (r v_\theta) - \partial_\theta v_r\right) \hat{\boldsymbol{\phi}}\end{aligned} \hspace{\stretch{1}}(2.39)

We can start by verifying the divergence equation for the electric field. Observe that our electric field has only an E_\phi component, so our divergence is

\begin{aligned}\nabla \cdot \vec{E}=\frac{1}{{r\sin\theta}} \partial_\phi \left(A \frac{\sin\theta}{r} \left( \cos(k r - \omega t) - \frac{1}{{k r}} \sin(k r - \omega t) \right) \right) = 0.\end{aligned} \hspace{\stretch{1}}(2.41)

We have a zero divergence since the component E_\phi has no \phi dependence (whereas \vec{E} itself does since the unit vector \hat{\boldsymbol{\phi}} = \hat{\boldsymbol{\phi}}(\phi)).

All of the rest of Maxwell’s equations require \vec{B} so we’ll have to first calculate that before progressing further.

An aside on approaches attempted to find \vec{B}

I tried two approaches without success to calculate \vec{B}. First I hoped that I could just integrate -\vec{E} to obtain \vec{A} and then take the curl. Doing so gave me a result that had \nabla \times \vec{B} \ne 0. I hunted for an algebraic error that would account for this, but could not find one.

The second approach that I tried, also without success, was to simply take the cross product \hat{\mathbf{r}} \times \vec{E}. This worked in the monochromatic plane wave case where we had

\begin{aligned}\vec{B} &= (\vec{k} \times \vec{\beta}) \sin(\omega t - \vec{k} \cdot \vec{x}) \\ \vec{E} &= \vec{\beta} {\left\lvert{\vec{k}}\right\rvert} \sin(\omega t - \vec{k} \cdot \vec{x})\end{aligned} \hspace{\stretch{1}}(2.42)

since one can easily show that \vec{B} = \vec{k} \times \vec{E}/{\left\lvert{\vec{k}}\right\rvert}. Again, I ended up with a result for \vec{B} that did not have a zero divergence.

Finding \vec{B} with a more systematic approach.

Following [3] \S 16.2, let’s try a phasor approach, assuming that all the solutions, whatever they are, have all the time dependence in an e^{-i\omega t} term.

Let’s write our fields as

\begin{aligned}\vec{E} &= \text{Real} (\mathbf{E} e^{-i \omega t}) \\ \vec{B} &= \text{Real} (\mathbf{B} e^{-i \omega t}).\end{aligned} \hspace{\stretch{1}}(2.44)

Substitution back into Maxwell’s equations thus requires equality in the real parts of

\begin{aligned}\nabla \cdot \mathbf{E} &= 0 \\ \nabla \cdot \mathbf{B} &= 0 \\ \nabla \times \mathbf{B} &= - i \frac{\omega}{c} \mathbf{E} \\ \nabla \times \mathbf{E} &= i \frac{\omega}{c} \mathbf{B}\end{aligned} \hspace{\stretch{1}}(2.46)

With k = \omega/c we can now directly compute the magnetic field phasor

\begin{aligned}\mathbf{B} = -\frac{i}{k} \nabla \times \mathbf{E}.\end{aligned} \hspace{\stretch{1}}(2.50)

The electric field of this problem can be put into phasor form by noting

\begin{aligned}\vec{E} = A \frac{\sin\theta}{r} \text{Real} \left( e^{i (k r - \omega t)} - \frac{i}{k r} e^{i(k r - \omega t)} \right) \hat{\boldsymbol{\phi}},\end{aligned} \hspace{\stretch{1}}(2.51)

which allows for reading off the phasor part directly

\begin{aligned}\mathbf{E} = A \frac{\sin\theta}{r} \left( 1 - \frac{i}{k r} \right) e^{i k r} \hat{\boldsymbol{\phi}}.\end{aligned} \hspace{\stretch{1}}(2.52)

Now we can compute the magnetic field phasor \mathbf{B}. Since we have only a \phi component in our field, the curl will have just \hat{\mathbf{r}} and \hat{\boldsymbol{\theta}} components. This is reasonable since we expect it to be perpendicular to \mathbf{E}.

\begin{aligned}\nabla \times (v_\phi \hat{\boldsymbol{\phi}}) = \frac{1}{{r \sin\theta}} \partial_\theta (\sin\theta v_\phi) \hat{\mathbf{r}}- \frac{1}{{r }} \partial_r (r v_\phi) \hat{\boldsymbol{\theta}}.\end{aligned} \hspace{\stretch{1}}(2.53)

Chugging through all the algebra we have

\begin{aligned}i k \mathbf{B} &=\nabla \times \mathbf{E} \\ &=\frac{2 A \cos\theta}{r^2} \left( 1 - \frac{i}{k r} \right) e^{i k r} \hat{\mathbf{r}}- \frac{A\sin\theta}{r } \frac{\partial {}}{\partial {r}} \left( \left( 1 - \frac{i}{k r} \right) e^{i k r} \right)\hat{\boldsymbol{\theta}} \\ &=\frac{2 A \cos\theta}{r^2} \left( 1 - \frac{i}{k r} \right) e^{i k r} \hat{\mathbf{r}}- \frac{A\sin\theta}{r } \left( i k + \frac{1}{{r}} + \frac{i}{k r^2} \right) e^{i k r} \hat{\boldsymbol{\theta}},\end{aligned}

so our magnetic phasor is

\begin{aligned}\mathbf{B} =\frac{2 A \cos\theta}{k r^2} \left( -i - \frac{1}{k r} \right) e^{i k r} \hat{\mathbf{r}}- \frac{A\sin\theta}{r} \left( 1 - \frac{i}{k r} + \frac{1}{k^2 r^2} \right) e^{i k r} \hat{\boldsymbol{\theta}}\end{aligned} \hspace{\stretch{1}}(2.54)

Multiplying by e^{-i\omega t} and taking real parts gives us the messy magnetic field expression

\begin{aligned}\begin{aligned}\vec{B} &=\frac{A}{r} \frac{2 \cos\theta}{k r} \left( \sin(k r - \omega t)- \frac{1}{k r} \cos(k r - \omega t) \right)\hat{\mathbf{r}} \\ &- \frac{A}{r} \frac{\sin\theta}{k r}\left(\sin(k r - \omega t)+ \frac{k^2 r^2 + 1}{k r}\cos(k r - \omega t)\right)\hat{\boldsymbol{\theta}}.\end{aligned}\end{aligned} \hspace{\stretch{1}}(2.55)

Since this was constructed directly from \nabla \times \vec{E} +\frac{1}{{c}} {\partial {\vec{B}}}/{\partial {t}} = 0, this implicitly verifies one more of Maxwell’s equations, leaving only \nabla \cdot \vec{B}, and \nabla \times \vec{B} -\frac{1}{{c}} {\partial {\vec{E}}}/{\partial {t}} = 0. Neither of these looks particularly fun to verify, however, we can take a small shortcut and use the phasors to verify without the explicit time dependence.

From 2.54 we have for the divergence

\begin{aligned}\nabla \cdot \mathbf{B} &=\frac{2 A \cos\theta}{k r^2 } \frac{\partial {}}{\partial {r}} \left(\left( -i - \frac{1}{k r} \right) e^{i k r} \right)- \frac{A 2 \cos\theta}{r^2} \left( 1 - \frac{i}{k r} + \frac{1}{k^2 r^2} \right) e^{i k r}  \\ &=\frac{2 A \cos\theta}{r^2 } e^{i k r}\left(\frac{1}{{k}}\left( \frac{1}{{k r^2}} + i k \left(-i - \frac{1}{{k r}}\right)\right)-\left( 1 - \frac{i}{k r} + \frac{1}{k^2 r^2} \right) \right) \\ &= 0 \qquad \square\end{aligned}

Let’s also verify the last of Maxwell’s equations in phasor form. The time dependence is knocked out, and we want to see that taking the curl of the magnetic phasor returns us (scaled) the electric phasor. That is

\begin{aligned}\nabla \times \mathbf{B} = - i \frac{\omega}{c} \mathbf{E}\end{aligned} \hspace{\stretch{1}}(2.56)

With only r and \theta components in the magnetic phasor we have

\begin{aligned}\nabla \times (v_r \hat{\mathbf{r}} + v_\theta \hat{\boldsymbol{\theta}}) =-\frac{1}{{r \sin\theta}} \partial_\phi v_\theta\hat{\mathbf{r}}+\frac{1}{{r }} \frac{1}{{\sin\theta}} \partial_\phi v_r \hat{\boldsymbol{\theta}}+\frac{1}{{r }} \left(\partial_r (r v_\theta) - \partial_\theta v_r\right) \hat{\boldsymbol{\phi}}\end{aligned} \hspace{\stretch{1}}(2.57)

Immediately, we see that with no explicit \phi dependence in the coordinates, we have no \hat{\mathbf{r}} nor \hat{\boldsymbol{\theta}} terms in the curl, which is good. Our curl is now just

\begin{aligned}\nabla \times \mathbf{B} &=\frac{1}{{r }} \left( A\sin\theta \partial_r \left( 1 - \frac{i}{k r} + \frac{1}{k^2 r^2} \right) e^{i k r} +\frac{2 A \sin\theta}{k r^2} \left( -i - \frac{1}{k r} \right) e^{i k r} \right) \hat{\boldsymbol{\phi}} \\ &=A \sin\theta \frac{1}{{r }} \left(\partial_r \left( 1 - \frac{i}{k r} + \frac{1}{k^2 r^2} \right) e^{i k r} +\frac{2 }{k r^2} \left( -i - \frac{1}{k r} \right) e^{i k r} \right) \hat{\boldsymbol{\phi}} \\ &=A \sin\theta e^{i k r}\frac{1}{{r }} \left((ik)\left( 1 - \frac{i}{k r} + \frac{1}{k^2 r^2} \right) +\left( \frac{i}{k r^2} - \frac{2}{k^2 r^3} \right) +\frac{2 }{k r^2} \left( -i - \frac{1}{k r} \right) \right) \hat{\boldsymbol{\phi}} \\ &=A \sin\theta e^{i k r}\frac{1}{{r }} \left(i k + \frac{1}{{r}} - \frac{ 4 }{k^2 r^3}\right) \hat{\boldsymbol{\phi}} \\ \end{aligned}

What we expect is \nabla \times \mathbf{B} = - i k \mathbf{E} which is

\begin{aligned}- i k \mathbf{E} =A \sin\theta e^{i k r}\frac{1}{{r }} \left(- i k - \frac{1}{{r}}\right) \hat{\boldsymbol{\phi}} \end{aligned} \hspace{\stretch{1}}(2.58)

FIXME: Somewhere I must have made a sign error, because these aren’t matching! Have an extra 1/r^3 term and the wrong sign on the 1/r term. On review, the likely culprit is the sign picked when reading off the phasor in 2.52: since \text{Real}( i e^{i u} ) = -\sin u, reproducing the -\sin(k r - \omega t)/(k r) term of 2.34 requires a 1 + i/(k r) factor rather than 1 - i/(k r). There also appears to be a dropped overall minus sign on the \partial_r ( r B_\theta ) term in the expansion above (B_\theta carries a minus sign). With those two corrections, which would propagate into 2.54 and the Poynting results below, the \nabla \times \mathbf{B} = -i k \mathbf{E} check appears to go through.
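To avoid redoing the curl algebra by hand, here is a minimal sympy sketch (my own addition, not part of the original solution) that builds the magnetic phasor from 2.50 using the spherical curl of 2.39 and then tests 2.56. With the 1 + i/(k r) sign the residual simplifies to zero; with 1 - i/(k r) it does not.

```python
import sympy as sp

r, th, k, A = sp.symbols('r theta k A', positive=True)

def curl_sph(v_r, v_th, v_ph):
    # spherical polar curl (2.39), specialized to fields with no phi dependence
    c_r = sp.diff(sp.sin(th) * v_ph, th) / (r * sp.sin(th))
    c_th = -sp.diff(r * v_ph, r) / r
    c_ph = (sp.diff(r * v_th, r) - sp.diff(v_r, th)) / r
    return [sp.simplify(c) for c in (c_r, c_th, c_ph)]

# electric phasor, with the +i/(k r) sign that 2.52 apparently should have had
E_ph = A * sp.sin(th) / r * (1 + sp.I / (k * r)) * sp.exp(sp.I * k * r)

# magnetic phasor from 2.50
B = [sp.simplify(-sp.I / k * c) for c in curl_sph(0, 0, E_ph)]

# residual of 2.56: should be [0, 0, 0]; it is not with the 1 - i/(k r) sign
residual = [sp.simplify(cb + sp.I * k * e)
            for cb, e in zip(curl_sph(*B), [0, 0, E_ph])]
print(residual)
```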

Part 2. Poynting and intensity.

Our Poynting vector is

\begin{aligned}\vec{S} = \frac{c}{4 \pi} \vec{E} \times \vec{B},\end{aligned} \hspace{\stretch{1}}(2.59)

which we could calculate from 2.34, and 2.55. However, that looks like it’s going to be a mess to multiply out. Let’s use instead the trick from \S 48 of the course text [1], and work with the complex quantities directly, noting that we have

\begin{aligned}(\text{Real} \mathbf{E} e^{i \alpha}) \times (\text{Real} \mathbf{B} e^{i \alpha}) &= \frac{1}{{4}} ( \mathbf{E} e^{i \alpha} + \mathbf{E}^{*} e^{-i \alpha}) \times ( \mathbf{B} e^{i \alpha} + \mathbf{B}^{*} e^{-i \alpha}) \\ &= \frac{1}{{2}} \text{Real} \left( \mathbf{E} \times \mathbf{B}^{*} + (\mathbf{E} \times \mathbf{B}) e^{2 i \alpha} \right).\end{aligned}
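As a quick sanity check of this identity (my own aside), a numerical test with random complex vectors standing in for the phasors; all the names here are just placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)
E = rng.normal(size=3) + 1j * rng.normal(size=3)   # stand-in complex amplitudes
B = rng.normal(size=3) + 1j * rng.normal(size=3)
alpha = 0.7

lhs = np.cross((E * np.exp(1j * alpha)).real, (B * np.exp(1j * alpha)).real)
rhs = 0.5 * (np.cross(E, np.conj(B)) + np.cross(E, B) * np.exp(2j * alpha)).real
print(np.allclose(lhs, rhs))   # True
```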

Now we can do the Poynting calculation using the simpler relations 2.52, 2.54.

Let’s also write

\begin{aligned}\mathbf{E} &= A e^{i k r} E_\phi \hat{\boldsymbol{\phi}} \\ \mathbf{B} &= A e^{i k r} ( B_r \hat{\mathbf{r}} + B_\theta \hat{\boldsymbol{\theta}} )\end{aligned} \hspace{\stretch{1}}(2.60)

where

\begin{aligned}E_\phi &= \frac{\sin\theta}{r} \left( 1 - \frac{i}{k r} \right)  \\ B_r &= -\frac{2 \cos\theta}{k r^2} \left( i + \frac{1}{k r} \right)  \\ B_\theta &= - \frac{\sin\theta}{r} \left( 1 - \frac{i}{k r} + \frac{1}{k^2 r^2} \right) \end{aligned} \hspace{\stretch{1}}(2.62)

So our Poynting vector is

\begin{aligned}\vec{S} &= \frac{A^2 c}{8 \pi} \text{Real}\left(E_\phi \hat{\boldsymbol{\phi}} \times ( B_r^{*} \hat{\mathbf{r}} + B_\theta^{*} \hat{\boldsymbol{\theta}} )+E_\phi \hat{\boldsymbol{\phi}} \times ( B_r \hat{\mathbf{r}} + B_\theta \hat{\boldsymbol{\theta}} ) e^{ 2 i ( k r - \omega t ) }\right) \\ \end{aligned}

Note that our unit vector basis \{ \hat{\mathbf{r}}, \hat{\boldsymbol{\theta}}, \hat{\boldsymbol{\phi}} \} was rotated from \{ \hat{\mathbf{z}}, \hat{\mathbf{x}}, \hat{\mathbf{y}} \}, so we have

\begin{aligned}\hat{\boldsymbol{\phi}} \times \hat{\mathbf{r}} &= \hat{\boldsymbol{\theta}} \\ \hat{\boldsymbol{\theta}} \times \hat{\boldsymbol{\phi}} &= \hat{\mathbf{r}} \\ \hat{\mathbf{r}} \times \hat{\boldsymbol{\theta}} &= \hat{\boldsymbol{\phi}} ,\end{aligned} \hspace{\stretch{1}}(2.65)

and plug this into our Poynting expression

\begin{aligned}\vec{S} &= \frac{A^2 c}{8 \pi} \text{Real}\left(E_\phi B_r^{*} \hat{\boldsymbol{\theta}} -E_\phi B_\theta^{*} \hat{\mathbf{r}} +(E_\phi B_r \hat{\boldsymbol{\theta}} -E_\phi B_\theta \hat{\mathbf{r}} )e^{ 2 i ( k r - \omega t ) }\right) \\ \end{aligned}

Now we have to multiply out our terms. We have

\begin{aligned}E_\phi B_r^{*} &=- \frac{\sin\theta}{r} \frac{2 \cos\theta}{k r^2} \left( 1 - \frac{i}{k r} \right)\left( -i + \frac{1}{k r} \right) \\ &=-\frac{ \sin(2\theta)}{k r^3}\left( -i - \frac{i}{k^2 r^2} \right),\end{aligned}

Since this has no real part, there is no average contribution to \vec{S} in the \hat{\boldsymbol{\theta}} direction. What do we have for the time dependent part

\begin{aligned}E_\phi B_r &=- \frac{\sin\theta}{r} \frac{2 \cos\theta}{k r^2} \left( 1 - \frac{i}{k r} \right)\left( i + \frac{1}{k r} \right) \\ &=-\frac{ \sin(2\theta)}{k r^3}\left( i + \frac{2}{k r} - \frac{i}{k^2 r^2} \right) \end{aligned}

This is non zero, so we have a time dependent \hat{\boldsymbol{\theta}} contribution that averages out. Moving on

\begin{aligned}- E_\phi B_\theta^{*}&= \frac{\sin^2\theta}{r^2} \left( 1 - \frac{i}{k r} \right)\left( 1 + \frac{i}{k r} + \frac{1}{k^2 r^2} \right) \\ &= \frac{\sin^2\theta}{r^2} \left( 1 + \frac{2}{k^2 r^2} - \frac{i}{k^3 r^3}\right).\end{aligned}

This is non-zero, so the steady state Poynting vector is in the outwards radial direction. The last piece is

\begin{aligned}- E_\phi B_\theta&= \frac{\sin^2\theta}{r^2} \left( 1 - \frac{i}{k r} \right)\left( 1 - \frac{i}{k r} + \frac{1}{k^2 r^2} \right) \\ &= \frac{\sin^2\theta}{r^2} \left( 1 - \frac{2i}{k r} - \frac{i}{k^3 r^3}\right).\end{aligned}

Assembling all the results we have

\begin{aligned}\begin{aligned}\vec{S} &= \frac{A^2 c}{8 \pi} \frac{\sin^2\theta}{r^2} \left( 1 + \frac{2}{k^2 r^2} \right) \hat{\mathbf{r}} \\ &\quad +\frac{A^2 c}{8 \pi} \text{Real} \left(\left(-\frac{ \sin(2\theta)}{k r^3} \left( i + \frac{2}{k r} - \frac{i}{k^2 r^2} \right) \hat{\boldsymbol{\theta}}+\frac{\sin^2\theta}{r^2} \left( 1 - \frac{2i}{k r} - \frac{i}{k^3 r^3}\right) \hat{\mathbf{r}} \right) e^{ 2 i ( k r - \omega t ) }\right) \end{aligned}\end{aligned}

We can read off the intensity directly

\begin{aligned}\vec{I} = \left\langle{{\vec{S}}}\right\rangle = \frac{A^2 c \sin^2 \theta}{8 \pi r^2} \left( 1 + \frac{2}{k^2 r^2} \right) \hat{\mathbf{r}} \end{aligned} \hspace{\stretch{1}}(2.68)

Part 3. Find the power.

Through a surface of radius r, the flux of the intensity vector 2.68 is

\begin{aligned}\int r^2 \sin\theta d\theta d\phi \,\hat{\mathbf{r}} \cdot \vec{I} &= \int r^2 \sin\theta d\theta d\phi \frac{A^2 c \sin^2 \theta}{8 \pi r^2} \left( 1 + \frac{2}{k^2 r^2} \right) \\ &= \frac{A^2 c}{4} \left( 1 + \frac{2}{k^2 r^2} \right) \int_0^\pi \sin^3\theta d\theta \\ &= \frac{A^2 c}{4} \left( 1 + \frac{2}{k^2 r^2} \right) {\left.\frac{1}{{12}}( \cos(3\theta) - 9 \cos\theta )\right\vert}_0^\pi.\end{aligned}

Our average power through the surface is therefore

\begin{aligned}\int d^2 \boldsymbol{\sigma} \cdot \vec{I} =\frac{A^2 c }{3}\left( 1 + \frac{2}{k^2 r^2} \right).\end{aligned} \hspace{\stretch{1}}(2.69)
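A quick sympy evaluation of the \theta integral and the assembled power (my own addition, not part of the original writeup):

```python
import sympy as sp

theta, r, k, A, c = sp.symbols('theta r k A c', positive=True)

angular = sp.integrate(sp.sin(theta)**3, (theta, 0, sp.pi))        # -> 4/3

# azimuthal integral gives 2 pi; intensity magnitude taken from 2.68
power = 2 * sp.pi * angular * A**2 * c / (8 * sp.pi) * (1 + 2 / (k**2 * r**2))
print(angular, sp.factor(power))   # 4/3, and A^2 c/3 (1 + 2/(k^2 r^2))
```

Note that a genuinely radiated power should not depend on r; the surviving 1/(k^2 r^2) piece presumably ties back to the phasor sign issue flagged in Part 1.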

Notes on grading of my solution.

Problem 2 above was the graded portion.

FIXME1: I lost a mark in the spot I expected, where I failed to verify one of the Maxwell equations. I’ll still need to figure out what got messed up there.

What occured to me later, also mentioned in the grading of the solution was that Maxwell’s equations in the space-time domain could have been used to solve for {\partial {\mathbf{B}}}/{\partial {t}} instead of all the momentum space logic (which simplified some things, but probably complicated others).

FIXME2: I lost a mark on 2.68 with a big X beside it. I’ll have to read the graded solution to see why.

FIXME3: Lost a mark for the final average power result 2.69. Again, I’ll have to go back and figure out why.

References

[1] L.D. Landau and E.M. Lifshitz. The classical theory of fields. Butterworth-Heinemann, 1980.

[2] D.J. Griffiths. Introduction to Electrodynamics. Prentice-Hall, 1981.

[3] J.D. Jackson. Classical Electrodynamics. John Wiley and Sons, 2nd edition, 1975.

Posted in Math and Physics Learning.

PHY450H1S. Relativistic Electrodynamics Tutorial 4 (TA: Simon Freedman). Waveguides: confined EM waves.

Posted by peeterjoot on March 14, 2011

[Click here for a PDF of this post with nicer formatting]

Motivation

While this isn’t part of the course, the topic of waveguides has so many applications that it is worth a mention, and that will be done in this tutorial.

We will setup our system with a waveguide (conducting surface that confines the radiation) oriented in the \hat{\mathbf{z}} direction. The shape can be arbitrary

PICTURE: cross section of wacky shape.

At the surface of a conductor.

At the surface of the conductor (I presume this means the interior surface where there is no charge or current enclosed) we have

\begin{aligned}\boldsymbol{\nabla} \times \mathbf{E} &= - \frac{1}{{c}} \frac{\partial {\mathbf{B}}}{\partial {t}} \\ \boldsymbol{\nabla} \times \mathbf{B} &= \frac{1}{{c}} \frac{\partial {\mathbf{E}}}{\partial {t}} \\ \boldsymbol{\nabla} \cdot \mathbf{B} &= 0 \\ \boldsymbol{\nabla} \cdot \mathbf{E} &= 0\end{aligned} \hspace{\stretch{1}}(1.1)

If we are talking about the exterior surface, do we need to make any other assumptions (perfect conductors, or constant potentials)?

Wave equations.

For electric and magnetic fields in vacuum, we can show easily that these, like the potentials, separately satisfy the wave equation

Taking curls of the Maxwell curl equations above we have

\begin{aligned}\boldsymbol{\nabla} \times (\boldsymbol{\nabla} \times \mathbf{E}) &= - \frac{1}{{c^2}} \frac{\partial^2 {\mathbf{E}}}{\partial {{t}}^2} \\ \boldsymbol{\nabla} \times (\boldsymbol{\nabla} \times \mathbf{B}) &= - \frac{1}{{c^2}} \frac{\partial^2 {\mathbf{B}}}{\partial {{t}}^2},\end{aligned} \hspace{\stretch{1}}(1.5)

but we have for vector \mathbf{M}

\begin{aligned}\boldsymbol{\nabla} \times (\boldsymbol{\nabla} \times \mathbf{M})=\boldsymbol{\nabla} (\boldsymbol{\nabla} \cdot \mathbf{M}) - \Delta \mathbf{M},\end{aligned} \hspace{\stretch{1}}(1.7)

which gives us a pair of wave equations

\begin{aligned}\square \mathbf{E} &= 0 \\ \square \mathbf{B} &= 0.\end{aligned} \hspace{\stretch{1}}(1.8)

We still have the original constraints of Maxwell’s equations to deal with, but we are free now to pick the complex exponentials as fundamental solutions, as our starting point

\begin{aligned}\mathbf{E} &= \mathbf{E}_0 e^{i k^a x_a} = \mathbf{E}_0 e^{ i (k^0 x_0 - \mathbf{k} \cdot \mathbf{x}) } \\ \mathbf{B} &= \mathbf{B}_0 e^{i k^a x_a} = \mathbf{B}_0 e^{ i (k^0 x_0 - \mathbf{k} \cdot \mathbf{x}) },\end{aligned} \hspace{\stretch{1}}(1.10)

With k_0 = \omega/c and x_0 = c t this is

\begin{aligned}\mathbf{E} &= \mathbf{E}_0 e^{ i (\omega t - \mathbf{k} \cdot \mathbf{x}) } \\ \mathbf{B} &= \mathbf{B}_0 e^{ i (\omega t - \mathbf{k} \cdot \mathbf{x}) }.\end{aligned} \hspace{\stretch{1}}(1.12)

For the vacuum case, with monochromatic light, we treated the amplitudes as constants. Let’s see what happens if we relax this assumption, and allow for spatial dependence (but no time dependence) of \mathbf{E}_0 and \mathbf{B}_0. For the LHS of the electric field curl equation we have

\begin{aligned}0 &= \boldsymbol{\nabla} \times \mathbf{E}_0 e^{i k_a x^a} \\ &= (\boldsymbol{\nabla} \times \mathbf{E}_0 - \mathbf{E}_0 \times \boldsymbol{\nabla}) e^{i k_a x^a} \\ &= (\boldsymbol{\nabla} \times \mathbf{E}_0 - \mathbf{E}_0 \times \mathbf{e}^\alpha i k_a \partial_\alpha x^a) e^{i k_a x^a} \\ &= (\boldsymbol{\nabla} \times \mathbf{E}_0 + \mathbf{E}_0 \times \mathbf{e}^\alpha i k^a {\delta_\alpha}^a ) e^{i k_a x^a} \\ &= (\boldsymbol{\nabla} \times \mathbf{E}_0 + i \mathbf{E}_0 \times \mathbf{k} ) e^{i k_a x^a}.\end{aligned}

Similarly for the divergence we have

\begin{aligned}0 &= \boldsymbol{\nabla} \cdot \mathbf{E}_0 e^{i k_a x^a} \\ &= (\boldsymbol{\nabla} \cdot \mathbf{E}_0 + \mathbf{E}_0 \cdot \boldsymbol{\nabla}) e^{i k_a x^a} \\ &= (\boldsymbol{\nabla} \cdot \mathbf{E}_0 + \mathbf{E}_0 \cdot \mathbf{e}^\alpha i k_a \partial_\alpha x^a) e^{i k_a x^a} \\ &= (\boldsymbol{\nabla} \cdot \mathbf{E}_0 - \mathbf{E}_0 \cdot \mathbf{e}^\alpha i k^a {\delta_\alpha}^a ) e^{i k_a x^a} \\ &= (\boldsymbol{\nabla} \cdot \mathbf{E}_0 - i \mathbf{k} \cdot \mathbf{E}_0 ) e^{i k_a x^a}.\end{aligned}

This provides constraints on the amplitudes

\begin{aligned}\boldsymbol{\nabla} \times \mathbf{E}_0 - i \mathbf{k} \times \mathbf{E}_0 &= -i \frac{\omega}{c} \mathbf{B}_0 \\ \boldsymbol{\nabla} \times \mathbf{B}_0 - i \mathbf{k} \times \mathbf{B}_0 &= i \frac{\omega}{c} \mathbf{E}_0 \\ \boldsymbol{\nabla} \cdot \mathbf{E}_0 - i \mathbf{k} \cdot \mathbf{E}_0 &= 0 \\ \boldsymbol{\nabla} \cdot \mathbf{B}_0 - i \mathbf{k} \cdot \mathbf{B}_0 &= 0\end{aligned} \hspace{\stretch{1}}(1.14)

Applying the wave equation operator to our phasor we get

\begin{aligned}0 &=\left(\frac{1}{{c^2}} \partial_{tt} - \boldsymbol{\nabla}^2 \right) \mathbf{E}_0 e^{i (\omega t - \mathbf{k} \cdot \mathbf{x})} \\ &=\left(-\frac{\omega^2}{c^2} - \boldsymbol{\nabla}^2 + 2 i \mathbf{k} \cdot \boldsymbol{\nabla} + \mathbf{k}^2 \right) \mathbf{E}_0 e^{i (\omega t - \mathbf{k} \cdot \mathbf{x})}\end{aligned}

Provided the amplitudes have no variation along \mathbf{k} (as will be the case below, where \mathbf{k} = k \hat{\mathbf{z}} and the amplitudes depend only on the transverse coordinates), the cross term 2 i \mathbf{k} \cdot \boldsymbol{\nabla} \mathbf{E}_0 drops out, and the momentum space equivalents of the wave equations are

\begin{aligned}\left( \boldsymbol{\nabla}^2 +\frac{\omega^2}{c^2} - \mathbf{k}^2 \right) \mathbf{E}_0 &= 0 \\ \left( \boldsymbol{\nabla}^2 +\frac{\omega^2}{c^2} - \mathbf{k}^2 \right) \mathbf{B}_0 &= 0.\end{aligned} \hspace{\stretch{1}}(1.18)

Observe that if c^2 \mathbf{k}^2 = \omega^2, then these amplitudes are harmonic functions (solutions to Laplace's equation). However, it doesn’t appear that we require such a lightlike relation for the four vector k^a = (\omega/c, \mathbf{k}).

Back to the tutorial notes.

In class we went straight to an assumed solution of the form

\begin{aligned}\mathbf{E} &= \mathbf{E}_0(x, y) e^{ i(\omega t - k z) } \\ \mathbf{B} &= \mathbf{B}_0(x, y) e^{ i(\omega t - k z) },\end{aligned} \hspace{\stretch{1}}(2.20)

where \mathbf{k} = k \hat{\mathbf{z}}. Our Laplacian was also written as the sum of components in the propagation and perpendicular directions

\begin{aligned}\boldsymbol{\nabla}^2 = \frac{\partial^2 {{}}}{\partial {{x_\perp}}^2} + \frac{\partial^2 {{}}}{\partial {{z}}^2}.\end{aligned} \hspace{\stretch{1}}(2.22)

With no z dependence in the amplitudes we have

\begin{aligned}\left( \frac{\partial^2 {{}}}{\partial {{x_\perp}}^2} +\frac{\omega^2}{c^2} - \mathbf{k}^2 \right) \mathbf{E}_0 &= 0 \\ \left( \frac{\partial^2 {{}}}{\partial {{x_\perp}}^2} +\frac{\omega^2}{c^2} - \mathbf{k}^2 \right) \mathbf{B}_0 &= 0.\end{aligned} \hspace{\stretch{1}}(2.23)
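As a cross check (my own aside), applying the d'Alembertian to the assumed waveguide form with sympy reproduces exactly the transverse operator of these equations:

```python
import sympy as sp

x, y, z, t, omega, k, c = sp.symbols('x y z t omega k c', positive=True)
E0 = sp.Function('E0')(x, y)                   # amplitude with no z dependence

field = E0 * sp.exp(sp.I * (omega * t - k * z))
box = sp.diff(field, t, 2) / c**2 - sum(sp.diff(field, v, 2) for v in (x, y, z))

# dividing out the exponential leaves (k^2 - omega^2/c^2) E0 - E0_xx - E0_yy,
# i.e. minus the transverse operator of 2.23 applied to E0
print(sp.simplify(box * sp.exp(-sp.I * (omega * t - k * z))))
```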

Separation into components.

It was left as an exercise to separate out our Maxwell equations, so that our field components \mathbf{E}_0 = \mathbf{E}_\perp + \mathbf{E}_z and \mathbf{B}_0 = \mathbf{B}_\perp + \mathbf{B}_z in the propagation direction, and components in the perpendicular direction are separated

\begin{aligned}\boldsymbol{\nabla} \times \mathbf{E}_0 &=(\boldsymbol{\nabla}_\perp + \hat{\mathbf{z}}\partial_z) \times \mathbf{E}_0 \\ &=\boldsymbol{\nabla}_\perp \times \mathbf{E}_0 \\ &=\boldsymbol{\nabla}_\perp \times (\mathbf{E}_\perp + \mathbf{E}_z) \\ &=\boldsymbol{\nabla}_\perp \times \mathbf{E}_\perp +\boldsymbol{\nabla}_\perp \times \mathbf{E}_z \\ &=( \hat{\mathbf{x}} \partial_x +\hat{\mathbf{y}} \partial_y ) \times ( \hat{\mathbf{x}} E_x +\hat{\mathbf{y}} E_y ) +\boldsymbol{\nabla}_\perp \times \mathbf{E}_z \\ &=\hat{\mathbf{z}} (\partial_x E_y - \partial_y E_x) +\boldsymbol{\nabla}_\perp \times \mathbf{E}_z.\end{aligned}

We can do something similar for \mathbf{B}_0. This allows for a split of 1.14 into \hat{\mathbf{z}} and perpendicular components

\begin{aligned}\boldsymbol{\nabla}_\perp \times \mathbf{E}_\perp &= -i \frac{\omega}{c} \mathbf{B}_z \\ \boldsymbol{\nabla}_\perp \times \mathbf{B}_\perp &= i \frac{\omega}{c} \mathbf{E}_z \\ \boldsymbol{\nabla}_\perp \times \mathbf{E}_z - i \mathbf{k} \times \mathbf{E}_\perp &= -i \frac{\omega}{c} \mathbf{B}_\perp \\ \boldsymbol{\nabla}_\perp \times \mathbf{B}_z - i \mathbf{k} \times \mathbf{B}_\perp &= i \frac{\omega}{c} \mathbf{E}_\perp \\ \boldsymbol{\nabla}_\perp \cdot \mathbf{E}_\perp &= i k E_z - \partial_z E_z \\ \boldsymbol{\nabla}_\perp \cdot \mathbf{B}_\perp &= i k B_z - \partial_z B_z.\end{aligned} \hspace{\stretch{1}}(3.25)

So we see that once we have a solution for \mathbf{E}_z and \mathbf{B}_z (by solving the wave equation above for those components), the components for the fields in terms of those components can be found. Alternately, if one solves for the perpendicular components of the fields, these propagation components are available immediately with only differentiation.

In the case where the perpendicular components are taken as given

\begin{aligned}\mathbf{B}_z &= i \frac{ c  }{\omega} \boldsymbol{\nabla}_\perp \times \mathbf{E}_\perp \\ \mathbf{E}_z &= -i \frac{ c  }{\omega} \boldsymbol{\nabla}_\perp \times \mathbf{B}_\perp,\end{aligned} \hspace{\stretch{1}}(3.31)

we can express the remaining ones strictly in terms of the perpendicular fields

\begin{aligned}\frac{\omega}{c} \mathbf{B}_\perp &= \frac{c}{\omega} \boldsymbol{\nabla}_\perp \times (\boldsymbol{\nabla}_\perp \times \mathbf{B}_\perp) + \mathbf{k} \times \mathbf{E}_\perp \\ \frac{\omega}{c} \mathbf{E}_\perp &= \frac{c}{\omega} \boldsymbol{\nabla}_\perp \times (\boldsymbol{\nabla}_\perp \times \mathbf{E}_\perp) - \mathbf{k} \times \mathbf{B}_\perp \\ \boldsymbol{\nabla}_\perp \cdot \mathbf{E}_\perp &= -i \frac{c}{\omega} (i k - \partial_z) \hat{\mathbf{z}} \cdot (\boldsymbol{\nabla}_\perp \times \mathbf{B}_\perp) \\ \boldsymbol{\nabla}_\perp \cdot \mathbf{B}_\perp &= i \frac{c}{\omega} (i k - \partial_z) \hat{\mathbf{z}} \cdot (\boldsymbol{\nabla}_\perp \times \mathbf{E}_\perp).\end{aligned} \hspace{\stretch{1}}(3.33)

Is it at all helpful to expand the double cross products?

\begin{aligned}\frac{\omega^2}{c^2} \mathbf{B}_\perp &= \boldsymbol{\nabla}_\perp (\boldsymbol{\nabla}_\perp \cdot \mathbf{B}_\perp) -{\boldsymbol{\nabla}_\perp}^2 \mathbf{B}_\perp + \frac{\omega}{c} \mathbf{k} \times \mathbf{E}_\perp \\ &= i \frac{c}{\omega}(i k - \partial_z)\boldsymbol{\nabla}_\perp \hat{\mathbf{z}} \cdot (\boldsymbol{\nabla}_\perp \times \mathbf{E}_\perp)-{\boldsymbol{\nabla}_\perp}^2 \mathbf{B}_\perp + \frac{\omega}{c} \mathbf{k} \times \mathbf{E}_\perp \end{aligned}

This gives us

\begin{aligned}\left( {\boldsymbol{\nabla}_\perp}^2 + \frac{\omega^2}{c^2} \right) \mathbf{B}_\perp &= - \frac{c}{\omega} (k + i\partial_z) \boldsymbol{\nabla}_\perp \hat{\mathbf{z}} \cdot (\boldsymbol{\nabla}_\perp \times \mathbf{E}_\perp) + \frac{\omega}{c} \mathbf{k} \times \mathbf{E}_\perp \\ \left( {\boldsymbol{\nabla}_\perp}^2 + \frac{\omega^2}{c^2} \right) \mathbf{E}_\perp &= -\frac{c}{\omega} (k + i\partial_z) \boldsymbol{\nabla}_\perp \hat{\mathbf{z}} \cdot (\boldsymbol{\nabla}_\perp \times \mathbf{B}_\perp) - \frac{\omega}{c} \mathbf{k} \times \mathbf{B}_\perp,\end{aligned} \hspace{\stretch{1}}(3.37)

but that doesn’t seem particularly useful for completely solving the system? It appears fairly messy to try to solve for \mathbf{E}_\perp and \mathbf{B}_\perp given the propagation direction fields. I wonder if there is a simplification available that I am missing?

Solving the momentum space wave equations.

Back to the class notes. We proceeded to solve for \mathbf{E}_z and \mathbf{B}_z from the wave equations by separation of variables. We wish to solve equations of the form

\begin{aligned}\left( \frac{\partial^2 {{}}}{\partial {{x}}^2} + \frac{\partial^2 {{}}}{\partial {{y}}^2} + \frac{\omega^2}{c^2} - \mathbf{k}^2 \right) \phi(x,y) = 0\end{aligned} \hspace{\stretch{1}}(4.39)

Write \phi(x,y) = X(x) Y(y), so that we have

\begin{aligned}\frac{X''}{X} + \frac{Y''}{Y} = \mathbf{k}^2 - \frac{\omega^2}{c^2}\end{aligned} \hspace{\stretch{1}}(4.40)

One solution is sinusoidal

\begin{aligned}\frac{X''}{X} &= -k_1^2 \\ \frac{Y''}{Y} &= -k_2^2 \\ -k_1^2 - k_2^2&= \mathbf{k}^2 - \frac{\omega^2}{c^2}.\end{aligned} \hspace{\stretch{1}}(4.41)

The example in the tutorial now switched to a rectangular waveguide, still oriented with the propagation direction down the z-axis, but with lengths a and b along the x and y axis respectively.

Writing k_1 = 2\pi m/a, and k_2 = 2 \pi n/ b, we have

\begin{aligned}\phi(x, y) = \sum_{m n} a_{m n} \exp\left( \frac{2 \pi i m}{a} x \right)\exp\left( \frac{2 \pi i n}{b} y \right)\end{aligned} \hspace{\stretch{1}}(4.44)

We were also provided with some definitions

\begin{definition}TE (Transverse Electric)

\mathbf{E}_3 = 0.
\end{definition}
\begin{definition}
TM (Transverse Magnetic)

\mathbf{B}_3 = 0.
\end{definition}
\begin{definition}
TEM (Transverse Electromagnetic)

\mathbf{E}_3 = \mathbf{B}_3 = 0.
\end{definition}

\begin{claim}TEM modes do not exist in a hollow waveguide.
\end{claim}

Why: I had in my notes

\begin{aligned}\boldsymbol{\nabla} \times \mathbf{E} = 0 & \implies \frac{\partial {E_2}}{\partial {x^1}} -\frac{\partial {E_1}}{\partial {x^2}} = 0 \\ \boldsymbol{\nabla} \cdot \mathbf{E} = 0 & \implies \frac{\partial {E_1}}{\partial {x^1}} +\frac{\partial {E_2}}{\partial {x^2}} = 0\end{aligned}

and then

\begin{aligned}\boldsymbol{\nabla}^2 \phi &= 0 \\ \phi &= \text{const}\end{aligned}

In retrospect I fail to see how these are connected? What happened to the \partial_t \mathbf{B} term in the curl equation above?

It was argued that we have \mathbf{E}_\parallel = \mathbf{B}_\perp = 0 on the boundary.

So for the TE case, where \mathbf{E}_3 = 0, we have from the separation of variables argument

\begin{aligned}\hat{\mathbf{z}} \cdot \mathbf{B}_0(x, y) =\sum_{m n} a_{m n} \cos\left( \frac{2 \pi m}{a} x \right)\cos\left( \frac{2 \pi n}{b} y \right).\end{aligned} \hspace{\stretch{1}}(4.45)

No sines because

\begin{aligned}B_1 \propto \frac{\partial {B_3}}{\partial {x^1}} \rightarrow \cos(k_1 x^1).\end{aligned} \hspace{\stretch{1}}(4.46)

The quantity

\begin{aligned}a_{m n}\cos\left( \frac{2 \pi m}{a} x \right)\cos\left( \frac{2 \pi n}{b} y \right).\end{aligned} \hspace{\stretch{1}}(4.47)

is called the TE_{m n} mode. Note that for the constant (m = n = 0) term, an Ampere loop argument requires \mathbf{B} = 0, since there is no enclosed current.

Writing

\begin{aligned}k &= \frac{\omega}{c} \sqrt{ 1 - \left(\frac{\omega_{m n}}{\omega}\right)^2 } \\ \omega_{m n} &= 2 \pi c \sqrt{ \left(\frac{m}{a} \right)^2 + \left(\frac{n}{b} \right)^2 }\end{aligned} \hspace{\stretch{1}}(4.48)

When \omega < \omega_{m n} we have k purely imaginary, and the term

\begin{aligned}e^{-i k z} = e^{- {\left\lvert{k}\right\rvert} z}\end{aligned} \hspace{\stretch{1}}(4.50)

represents the die off.

\omega_{10} is the smallest.

Note that the convention is that the m in TE_{m n} is the bigger of the two indexes, so \omega > \omega_{10}.

The phase velocity

\begin{aligned}V_\phi = \frac{\omega}{k} = \frac{c}{\sqrt{ 1 - \left(\frac{\omega_{m n}}{\omega}\right)^2 }} \ge c\end{aligned} \hspace{\stretch{1}}(4.51)

However, energy is transmitted with the group velocity, the ratio of the Poynting vector and energy density

\begin{aligned}\frac{\left\langle{\mathbf{S}}\right\rangle}{\left\langle{{U}}\right\rangle} = V_g = \frac{\partial {\omega}}{\partial {k}} = 1/\frac{\partial {k}}{\partial {\omega}}\end{aligned} \hspace{\stretch{1}}(4.52)

(This can be shown).

Since

\begin{aligned}\left(\frac{\partial {k}}{\partial {\omega}}\right)^{-1} = \left(\frac{\partial {}}{\partial {\omega}}\sqrt{ (\omega/c)^2 - (\omega_{m n}/c)^2 }\right)^{-1} = c \sqrt{ 1 - (\omega_{m n}/\omega)^2 } \le c\end{aligned} \hspace{\stretch{1}}(4.53)

We see that the energy is transmitted at less than the speed of light as expected.
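To put some numbers to this (my own aside, with made-up guide dimensions), using the conventions of these notes for the cutoff frequency 4.48:

```python
import numpy as np

a, b = 0.05, 0.02            # metres; made-up rectangular guide dimensions
c = 3.0e8                    # m/s

def omega_mn(m, n):
    # cutoff frequency 4.48, using the 2 pi convention of these notes
    return 2 * np.pi * c * np.sqrt((m / a)**2 + (n / b)**2)

omega = 1.5 * omega_mn(1, 0)                       # drive above the lowest cutoff
k = (omega / c) * np.sqrt(1 - (omega_mn(1, 0) / omega)**2)

v_phase = omega / k                                       # >= c
v_group = c * np.sqrt(1 - (omega_mn(1, 0) / omega)**2)    # <= c
print(v_phase / c, v_group / c, v_phase * v_group / c**2) # the last product is 1
```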

Final remarks.

I’d started converting my handwritten scrawl for this tutorial into an attempt at working through these ideas with enough detail that they are self contained, but gave up part way. This appears to me to be too big a sub-discipline to do justice to in a one hour class. As is, it is enough to at least get a concept of some of the ideas involved. I think were I to learn this for real, I’d need a good text as a reference (or the time to attempt to blunder through the ideas in much more detail).

Posted in Math and Physics Learning.

PHY450H1S. Relativistic Electrodynamics Lecture 18 (Taught by Prof. Erich Poppitz). Green’s function solution to Maxwell’s equation.

Posted by peeterjoot on March 12, 2011

[Click here for a PDF of this post with nicer formatting]

Reading.

Covering chapter 8 material from the text [1].

Covering lecture notes pp. 136-146: continued reminder of electrostatic Green’s function (136); the retarded Green’s function of the d’Alembert operator: derivation and properties (137-140); the solution of the d’Alembert equation with a source: retarded potentials (141-142)

Solving the forced wave equation.

See the notes for a complex variables and Fourier transform method of deriving the Green’s function. In class, we’ll just pull it out of a magic hat. We wish to solve

\begin{aligned}\square A^k = \partial_i \partial^i A^k = \frac{4 \pi}{c} j^k\end{aligned} \hspace{\stretch{1}}(2.1)

(with a \partial_i A^i = 0 gauge choice).

Our Green’s method utilizes

\begin{aligned}\square_{(\mathbf{x}, t)} G(\mathbf{x} - \mathbf{x}', t - t') = \delta^3( \mathbf{x} - \mathbf{x}') \delta( t - t')\end{aligned} \hspace{\stretch{1}}(2.2)

If we know such a function, our solution is simple to obtain

\begin{aligned}A^k(\mathbf{x}, t)= \int d^3 \mathbf{x}' dt' \frac{4 \pi}{c} j^k(\mathbf{x}', t') G(\mathbf{x} - \mathbf{x}', t - t')\end{aligned} \hspace{\stretch{1}}(2.3)

Proof:

\begin{aligned}\square_{(\mathbf{x}, t)} A^k(\mathbf{x}, t)&=\int d^3 \mathbf{x}' dt' \frac{4 \pi}{c} j^k(\mathbf{x}', t')\square_{(\mathbf{x}, t)}G(\mathbf{x} - \mathbf{x}', t - t') \\ &=\int d^3 \mathbf{x}' dt' \frac{4 \pi}{c} j^k(\mathbf{x}', t')\delta^3( \mathbf{x} - \mathbf{x}') \delta( t - t') \\ &=\frac{4 \pi}{c} j^k(\mathbf{x}, t)\end{aligned}

Claim:

\begin{aligned}G(\mathbf{x}, t) = \frac{\delta(t - {\left\lvert{\mathbf{x}}\right\rvert}/c)}{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} }\end{aligned} \hspace{\stretch{1}}(2.4)

This is the retarded Green’s function of the operator \square, where

\begin{aligned}\square G(\mathbf{x}, t) = \delta^3(\mathbf{x}) \delta(t)\end{aligned} \hspace{\stretch{1}}(2.5)

Proof of the d’Alembertian Green’s function

Our Prof is excellent at motivating any results that he pulls out of magic hats. He’s said that he’s included a derivation using Fourier transforms and tricky contour integration arguments in the class notes for anybody who is interested (and for those who also know how to do contour integration). For those who don’t know contour integration yet (some people are taking it concurrently), one can actually prove this by simply applying the wave equation operator to this function. This treats the delta function as a normal function that one can take the derivatives of, something that can be well defined in the context of generalized functions. Chugging ahead with this approach we have

\begin{aligned}\square G(\mathbf{x}, t)=\left(\frac{1}{{c^2}} \frac{\partial^2 {{}}}{\partial {{t}}^2} - \Delta\right)\frac{\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right)}{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} }=\frac{\delta''\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right)}{4 \pi c^2 {\left\lvert{\mathbf{x}}\right\rvert} }- \Delta \frac{\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right)}{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} }.\end{aligned} \hspace{\stretch{1}}(2.6)

This starts things off and now things get a bit hairy. It’s helpful to consider a chain rule expansion of the Laplacian

\begin{aligned}\Delta (u v)&=\partial_{\alpha\alpha} (u v) \\ &=\partial_{\alpha} (v \partial_\alpha u+ u\partial_\alpha v) \\ &=(\partial_\alpha v) (\partial_\alpha u ) + v \partial_{\alpha\alpha} u+(\partial_\alpha u) (\partial_\alpha v ) + u \partial_{\alpha\alpha} v.\end{aligned}

In vector form this is

\begin{aligned}\Delta (u v) = u \Delta v + 2 (\boldsymbol{\nabla} u) \cdot (\boldsymbol{\nabla} v) + v \Delta u.\end{aligned} \hspace{\stretch{1}}(2.7)

Applying this to the Laplacian portion of 2.6 we have

\begin{aligned}\Delta \frac{\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right)}{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} }=\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right)\Delta\frac{1}{{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} }}+\left(\boldsymbol{\nabla} \frac{1}{{2 \pi {\left\lvert{\mathbf{x}}\right\rvert} }}\right)\cdot\left(\boldsymbol{\nabla}\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) \right)+\frac{1}{{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} }}\Delta\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right).\end{aligned} \hspace{\stretch{1}}(2.8)

Here we make the identification

\begin{aligned}\Delta \frac{1}{{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} }} = - \delta^3(\mathbf{x}).\end{aligned} \hspace{\stretch{1}}(2.9)

This could be considered a given from our knowledge of electrostatics, but it’s not too much work to prove it directly.

An aside. Proving the Laplacian Green’s function.

If -1/{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} } is a Green’s function for the Laplacian, then the Laplacian of the convolution of this with a test function should recover that test function

\begin{aligned}\Delta \int d^3 \mathbf{x}' \left(-\frac{1}{{4 \pi {\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert} }} \right) f(\mathbf{x}') = f(\mathbf{x}).\end{aligned} \hspace{\stretch{1}}(2.10)

We can directly evaluate the LHS of this equation, following the approach in [2]. First note that the Laplacian can be pulled into the integral and operates only on the presumed Green’s function. For that operation we have

\begin{aligned}\Delta \left(-\frac{1}{{4 \pi {\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert} }} \right)=-\frac{1}{{4 \pi}} \boldsymbol{\nabla} \cdot \boldsymbol{\nabla} \frac{1}{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}}.\end{aligned} \hspace{\stretch{1}}(2.11)

It will be helpful to compute the gradient of various powers of {\left\lvert{\mathbf{x}}\right\rvert}

\begin{aligned}\boldsymbol{\nabla} {\left\lvert{\mathbf{x}}\right\rvert}^a&=e_\alpha \partial_\alpha (x^\beta x^\beta)^{a/2} \\ &=e_\alpha \left(\frac{a}{2}\right) 2 x^\beta {\delta_\beta}^\alpha {\left\lvert{\mathbf{x}}\right\rvert}^{a - 2}.\end{aligned}

In particular, when \mathbf{x} \ne 0, this gives us

\begin{aligned}\boldsymbol{\nabla} {\left\lvert{\mathbf{x}}\right\rvert} &= \frac{\mathbf{x}}{{\left\lvert{\mathbf{x}}\right\rvert}} \\ \boldsymbol{\nabla} \frac{1}{{{\left\lvert{\mathbf{x}}\right\rvert}}} &= -\frac{\mathbf{x}}{{\left\lvert{\mathbf{x}}\right\rvert}^3} \\ \boldsymbol{\nabla} \frac{1}{{{\left\lvert{\mathbf{x}}\right\rvert}^3}} &= -3 \frac{\mathbf{x}}{{\left\lvert{\mathbf{x}}\right\rvert}^5}.\end{aligned} \hspace{\stretch{1}}(2.12)

For the Laplacian of 1/{\left\lvert{\mathbf{x}}\right\rvert}, at the points \mathbf{x} \ne 0 where this is well defined we have

\begin{aligned}\Delta \frac{1}{{{\left\lvert{\mathbf{x}}\right\rvert}}} &=\boldsymbol{\nabla} \cdot \boldsymbol{\nabla} \frac{1}{{{\left\lvert{\mathbf{x}}\right\rvert}}} \\ &= -\partial_\alpha \frac{x^\alpha}{{\left\lvert{\mathbf{x}}\right\rvert}^3} \\ &= -\frac{3}{{\left\lvert{\mathbf{x}}\right\rvert}^3} - x^\alpha \partial_\alpha \frac{1}{{\left\lvert{\mathbf{x}}\right\rvert}^3} \\ &= -\frac{3}{{\left\lvert{\mathbf{x}}\right\rvert}^3} - \mathbf{x} \cdot \boldsymbol{\nabla} \frac{1}{{\left\lvert{\mathbf{x}}\right\rvert}^3} \\ &= -\frac{3}{{\left\lvert{\mathbf{x}}\right\rvert}^3} + 3 \frac{\mathbf{x}^2}{{\left\lvert{\mathbf{x}}\right\rvert}^5}\end{aligned}

So we have a zero. This means that the Laplacian operation

\begin{aligned}\Delta \int d^3 \mathbf{x}' \frac{1}{{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert} }} f(\mathbf{x}') =\lim_{\epsilon = {\left\lvert{\mathbf{x} -\mathbf{x}'}\right\rvert} \rightarrow 0}f(\mathbf{x}) \int d^3 \mathbf{x}' \Delta \frac{1}{{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}}},\end{aligned} \hspace{\stretch{1}}(2.15)

can only have a value in a neighborhood of point \mathbf{x}. Writing \Delta = \boldsymbol{\nabla} \cdot \boldsymbol{\nabla} we have

\begin{aligned}\Delta \int d^3 \mathbf{x}' \frac{1}{{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert} }} f(\mathbf{x}') =\lim_{\epsilon = {\left\lvert{\mathbf{x} -\mathbf{x}'}\right\rvert} \rightarrow 0}f(\mathbf{x}) \int d^3 \mathbf{x}' \boldsymbol{\nabla} \cdot -\frac{\mathbf{x} - \mathbf{x}'}{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}^3}.\end{aligned} \hspace{\stretch{1}}(2.16)

Observing that \boldsymbol{\nabla} \cdot f(\mathbf{x} -\mathbf{x}') = -\boldsymbol{\nabla}' f(\mathbf{x} - \mathbf{x}') we can put this in a form that allows for use of Stokes theorem so that we can convert this to a surface integral

\begin{aligned}\Delta \int d^3 \mathbf{x}' \frac{1}{{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert} }} f(\mathbf{x}') &=\lim_{\epsilon = {\left\lvert{\mathbf{x} -\mathbf{x}'}\right\rvert} \rightarrow 0}f(\mathbf{x}) \int d^3 \mathbf{x}' \boldsymbol{\nabla}' \cdot \frac{\mathbf{x} - \mathbf{x}'}{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}^3} \\ &=\lim_{\epsilon = {\left\lvert{\mathbf{x} -\mathbf{x}'}\right\rvert} \rightarrow 0}f(\mathbf{x}) \int d^2 \mathbf{x}' \mathbf{n} \cdot \frac{\mathbf{x} - \mathbf{x}'}{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}^3} \\ &= \int_{\phi=0}^{2\pi} \int_{\theta = 0}^\pi \epsilon^2 \sin\theta d\theta d\phi \frac{\mathbf{x}' - \mathbf{x}}{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}} \cdot \frac{\mathbf{x} - \mathbf{x}'}{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}^3} \\ &= -\int_{\phi=0}^{2\pi} \int_{\theta = 0}^\pi \epsilon^2 \sin\theta d\theta d\phi \frac{\epsilon^2}{\epsilon^4}\end{aligned}

where we use (\mathbf{x}' - \mathbf{x})/{\left\lvert{\mathbf{x}' - \mathbf{x}}\right\rvert} as the outwards normal for a sphere centered at \mathbf{x} of radius \epsilon. This integral is just -4 \pi, so we have

\begin{aligned}\Delta \int d^3 \mathbf{x}' \frac{1}{{-4 \pi {\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert} }} f(\mathbf{x}') =f(\mathbf{x}).\end{aligned} \hspace{\stretch{1}}(2.17)

Convolving f(\mathbf{x}) with -1/(4 \pi {\left\lvert{\mathbf{x}}\right\rvert}) and applying the Laplacian produces f(\mathbf{x}), allowing an identification of \Delta \left( -1/(4 \pi {\left\lvert{\mathbf{x}}\right\rvert}) \right) with a delta function, since the two have the same operational effect

\begin{aligned}\int d^3 \mathbf{x}' \delta(\mathbf{x} - \mathbf{x}') f(\mathbf{x}') =f(\mathbf{x}).\end{aligned} \hspace{\stretch{1}}(2.18)
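As a small cross check of the pointwise statement used above (that the Laplacian of 1/{\left\lvert{\mathbf{x}}\right\rvert} vanishes away from the origin), here's a sympy verification (my own aside):

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
r = sp.sqrt(x**2 + y**2 + z**2)

lap = sum(sp.diff(1 / r, v, 2) for v in (x, y, z))
print(sp.simplify(lap))       # 0, valid away from the origin where 1/r is smooth
```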

Returning to the d’Alembertian Green’s function.

We need two additional computations to finish the job. The first is the gradient of the delta function

\begin{aligned}\boldsymbol{\nabla} \delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) &= ? \\ \Delta \delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) &= ?\end{aligned}

Consider \boldsymbol{\nabla} f(g(\mathbf{x})). This is

\begin{aligned}\boldsymbol{\nabla} f(g(\mathbf{x}))&=e_\alpha \frac{\partial {f(g(\mathbf{x}))}}{\partial {x^\alpha}} \\ &=e_\alpha \frac{\partial {f}}{\partial {g}} \frac{\partial {g}}{\partial {x^\alpha}},\end{aligned}

so we have

\begin{aligned}\boldsymbol{\nabla} f(g(\mathbf{x}))=\frac{\partial {f}}{\partial {g}} \boldsymbol{\nabla} g.\end{aligned} \hspace{\stretch{1}}(2.19)

The Laplacian is similar

\begin{aligned}\Delta f(g)&= \boldsymbol{\nabla} \cdot \left(\frac{\partial {f}}{\partial {g}} \boldsymbol{\nabla} g \right) \\ &= \partial_\alpha \left(\frac{\partial {f}}{\partial {g}} \partial_\alpha g \right) \\ &= \left( \partial_\alpha \frac{\partial {f}}{\partial {g}} \right) \partial_\alpha g +\frac{\partial {f}}{\partial {g}} \partial_{\alpha\alpha} g  \\ &= \frac{\partial^2 {{f}}}{\partial {{g}}^2} \left( \partial_\alpha g \right) (\partial_\alpha g)+\frac{\partial {f}}{\partial {g}} \Delta g,\end{aligned}

so we have

\begin{aligned}\Delta f(g)= \frac{\partial^2 {{f}}}{\partial {{g}}^2} (\boldsymbol{\nabla} g)^2 +\frac{\partial {f}}{\partial {g}} \Delta g\end{aligned} \hspace{\stretch{1}}(2.20)

With g(\mathbf{x}) = {\left\lvert{\mathbf{x}}\right\rvert}, we’ll need the Laplacian of this vector magnitude

\begin{aligned}\Delta {\left\lvert{\mathbf{x}}\right\rvert}&=\partial_\alpha \frac{x_\alpha}{{\left\lvert{\mathbf{x}}\right\rvert}} \\ &=\frac{3}{{\left\lvert{\mathbf{x}}\right\rvert}} + x_\alpha \partial_\alpha (x^\beta x^\beta)^{-1/2} \\ &=\frac{3}{{\left\lvert{\mathbf{x}}\right\rvert}} - \frac{x_\alpha x_\alpha}{{\left\lvert{\mathbf{x}}\right\rvert}^3} \\ &= \frac{2}{{\left\lvert{\mathbf{x}}\right\rvert}} \end{aligned}

So that we have

\begin{aligned}\boldsymbol{\nabla} \delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) &= -\frac{1}{{c}} \delta'\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) \frac{\mathbf{x}}{{\left\lvert{\mathbf{x}}\right\rvert}} \\ \Delta \delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) &=\frac{1}{{c^2}} \delta''\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) -\frac{1}{{c}} \delta'\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) \frac{2}{{\left\lvert{\mathbf{x}}\right\rvert}} \end{aligned} \hspace{\stretch{1}}(2.21)

Now we have all the bits and pieces of 2.8 ready to assemble

\begin{aligned}\Delta \frac{\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right)}{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} }&=-\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) \delta^3(\mathbf{x}) \\ &\quad +\frac{1}{{2\pi}} \left( - \frac{\mathbf{x}}{{\left\lvert{\mathbf{x}}\right\rvert}^3} \right)\cdot-\frac{1}{{c}} \delta'\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) \frac{\mathbf{x}}{{\left\lvert{\mathbf{x}}\right\rvert}} \\ &\quad +\frac{1}{{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} }}\left(\frac{1}{{c^2}} \delta''\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) -\frac{1}{{c}} \delta'\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) \frac{2}{{\left\lvert{\mathbf{x}}\right\rvert}} \right) \\ &=-\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) \delta^3(\mathbf{x}) +\frac{1}{{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} c^2 }}\delta''\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) \end{aligned}

Since we also have

\begin{aligned}\frac{1}{{c^2}} \partial_{tt}\frac{\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right)}{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} }=\frac{\delta''\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right)}{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} c^2}\end{aligned} \hspace{\stretch{1}}(2.23)

The \delta'' terms cancel out in the d’Alembertian, leaving just

\begin{aligned}\square \frac{\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right)}{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} }=\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) \delta^3(\mathbf{x}) \end{aligned} \hspace{\stretch{1}}(2.24)

Noting that the spatial delta function is non-zero only when \mathbf{x} = 0, which means \delta(t - {\left\lvert{\mathbf{x}}\right\rvert}/c) = \delta(t) in this product, and we finally have

\begin{aligned}\square \frac{\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right)}{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} }=\delta(t) \delta^3(\mathbf{x}) \end{aligned} \hspace{\stretch{1}}(2.25)

We write

\begin{aligned}G(\mathbf{x}, t) = \frac{\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right)}{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} },\end{aligned} \hspace{\stretch{1}}(2.26)
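The cancellation of the \delta'' terms can also be checked symbolically away from the origin by replacing the delta with an arbitrary smooth function; this is a small sympy aside on my part, not part of the lecture:

```python
import sympy as sp

t, x, y, z, c = sp.symbols('t x y z c', positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)
f = sp.Function('f')                     # smooth stand-in for the delta function

G = f(t - r / c) / (4 * sp.pi * r)
box_G = sp.diff(G, t, 2) / c**2 - sum(sp.diff(G, v, 2) for v in (x, y, z))
print(sp.simplify(box_G))                # 0 for r != 0: the f'' pieces cancel
```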

Elaborating on the wave equation Green’s function

The Green’s function 2.26 is a distribution that is non-zero only on the future lightcone. Observe that for t < 0 we have

\begin{aligned}\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right)&=\delta\left(-{\left\lvert{t}\right\rvert} - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) \\ &= 0.\end{aligned}

We say that G is supported only on the future light cone. At \mathbf{x} = 0, only the contributions for t > 0 matter. Note that in the “old days”, Green’s functions used to be called influence functions, a name that works particularly well in this case. We have other Green’s functions for the d’Alembertian. The one above is called the retarded Green’s function, and we also have an advanced Green’s function. Writing + for advanced and - for retarded, these are

\begin{aligned}G_{\pm} = \frac{\delta\left(t \pm \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right)}{4 \pi {\left\lvert{\mathbf{x}}\right\rvert}}\end{aligned} \hspace{\stretch{1}}(3.27)

There are also causal and non-causal variations that won’t be of interest for this course.

This arms us now to solve any problem in the Lorentz gauge

\begin{aligned}A^k(\mathbf{x}, t) = \frac{1}{{c}} \int d^3 \mathbf{x}' dt' \frac{\delta\left(t - t' - \frac{{\left\lvert{\mathbf{x} -\mathbf{x}'}\right\rvert}}{c}\right)}{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}}j^k(\mathbf{x}', t')+\text{An arbitrary collection of EM waves.}\end{aligned} \hspace{\stretch{1}}(3.28)

The additional EM waves are the possible contributions from the homogeneous equation.

Since \delta(t - t' - {\left\lvert{\mathbf{x} -\mathbf{x}'}\right\rvert}/c) is non-zero only when t' = t - {\left\lvert{\mathbf{x} -\mathbf{x}'}\right\rvert}/c, the non-homogeneous parts of 3.28 reduce to

\begin{aligned}A^k(\mathbf{x}, t) = \frac{1}{{c}} \int d^3 \mathbf{x}' \frac{j^k(\mathbf{x}', t - {\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}/c)}{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}}.\end{aligned} \hspace{\stretch{1}}(3.29)

Our potentials at time t and spatial position \mathbf{x} are completely specified in terms of the sums of the currents acting at the retarded time t - {\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}/c. The field can only depend on the charge and current distribution in the past. Specifically, it can only depend on the charge and current distribution on the past light cone of the spacetime point at which we measure the field.

Example of the Green’s function. Consider a charged particle moving on a worldline

\begin{aligned}(c t, \mathbf{x}_c(t))\end{aligned} \hspace{\stretch{1}}(4.30)

(c for classical)

For this particle

\begin{aligned}\rho(\mathbf{x}, t) &= e \delta^3(\mathbf{x} - \mathbf{x}_c(t)) \\ \mathbf{j}(\mathbf{x}, t) &= e \dot{\mathbf{x}}_c(t) \delta^3(\mathbf{x} - \mathbf{x}_c(t))\end{aligned} \hspace{\stretch{1}}(4.31)

\begin{aligned}\begin{bmatrix}A^0(\mathbf{x}, t) \\ \mathbf{A}(\mathbf{x}, t)\end{bmatrix}&=\frac{1}{{c}}\int d^3 \mathbf{x}' dt'\frac{ \delta( t - t' - {\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}/c) }{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}}\begin{bmatrix}c e \\ e \dot{\mathbf{x}}_c(t')\end{bmatrix}\delta^3(\mathbf{x}' - \mathbf{x}_c(t')) \\ &=\int_{-\infty}^\infty dt'\frac{ \delta( t - t' - {\left\lvert{\mathbf{x} - \mathbf{x}_c(t')}\right\rvert}/c) }{{\left\lvert{\mathbf{x} - \mathbf{x}_c(t')}\right\rvert}}\begin{bmatrix}e \\ e \frac{\dot{\mathbf{x}}_c(t')}{c}\end{bmatrix}\end{aligned}

PICTURE: light cones, and curved worldline. Pick an arbitrary point (\mathbf{x}_0, t_0), and draw the past light cone, looking at where this intersects with the trajectory

For the arbitrary point (\mathbf{x}_0, t_0) we see that this point and the retarded time (\mathbf{x}_c(t_r), t_r) obey the relation

\begin{aligned}c (t_0 - t_r) = {\left\lvert{\mathbf{x}_0 - \mathbf{x}_c(t_r)}\right\rvert}\end{aligned} \hspace{\stretch{1}}(4.33)

This retarded time is unique. There is only one such intersection.

Our job is to calculate

\begin{aligned}\int_{-\infty}^\infty dx\, \delta(f(x)) g(x) = \frac{g(x_{*})}{{\left\lvert{f'(x_{*})}\right\rvert}}\end{aligned} \hspace{\stretch{1}}(4.34)

where f(x_{*}) = 0 (assuming a single root, as is the case here).
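Before applying this to our problem, a tiny sympy illustration of the general rule (my own aside; in general there is a sum over all the roots, each weighted by 1/{\left\lvert{f'}\right\rvert}):

```python
import sympy as sp

x = sp.symbols('x', real=True)
f = x**2 - 4                 # simple roots at x = -2, 2
g = sp.exp(x)

lhs = sp.integrate(sp.DiracDelta(f) * g, (x, -sp.oo, sp.oo))
rhs = sum(g.subs(x, x0) / abs(f.diff(x).subs(x, x0)) for x0 in (-2, 2))
print(sp.simplify(lhs - rhs))   # 0
```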

\begin{aligned}f(t') = t - t' - {\left\lvert{\mathbf{x} - \mathbf{x}_c(t')}\right\rvert}/c\end{aligned} \hspace{\stretch{1}}(4.35)

\begin{aligned}\frac{\partial {f}}{\partial {t'}}&= -1 - \frac{1}{{c}} \frac{\partial {}}{\partial {t'}} \sqrt{ (\mathbf{x} - \mathbf{x}_c(t')) \cdot (\mathbf{x} - \mathbf{x}_c(t')) } \\ &= -1 + \frac{1}{{c}} \frac{(\mathbf{x} - \mathbf{x}_c(t')) \cdot \mathbf{v}_c(t')}{{\left\lvert{\mathbf{x} - \mathbf{x}_c(t')}\right\rvert}}\end{aligned}

References

[1] L.D. Landau and E.M. Lifshitz. The classical theory of fields. Butterworth-Heinemann, 1980.

[2] M. Schwartz. Principles of Electrodynamics. Dover Publications, 1987.

Posted in Math and Physics Learning.

PHY450H1S. Relativistic Electrodynamics Tutorial 5 (TA: Simon Freedman). Angular momentum of EM fields

Posted by peeterjoot on March 10, 2011

[Click here for a PDF of this post with nicer formatting]

Motivation.

Long solenoid of radius R, n turns per unit length, current I. Coaxial with the solenoid are two long cylindrical shells of length l and (\text{radius},\text{charge}) of (a, Q), and (b, -Q) respectively, where a < b.

When current is gradually reduced what happens?

The initial fields.

Initial Magnetic field.

For the initial static conditions where we have only a (constant) magnetic field, the Maxwell-Ampere equation takes the form

\begin{aligned}\boldsymbol{\nabla} \times \mathbf{B} = \frac{4 \pi}{c} \mathbf{j}\end{aligned} \hspace{\stretch{1}}(1.1)

\paragraph{On the name of this equation}. In notes from one of the lectures I had this called the Maxwell-Faraday equation, despite the fact that this isn’t the one to which Maxwell made his displacement current addition. Did the Professor call it that, or was this my addition? In [2] Faraday’s law is also called the Maxwell-Faraday equation. [1] calls this the Ampere-Maxwell equation, which makes more sense.

Put into integral form by integrating over an open surface we have

\begin{aligned}\int_A (\boldsymbol{\nabla} \times \mathbf{B}) \cdot d\mathbf{a} = \frac{4 \pi}{c} \int_A \mathbf{j} \cdot d\mathbf{a}\end{aligned} \hspace{\stretch{1}}(1.2)

The current density passing through the surface is defined as the enclosed current, circulating around the bounding loop

\begin{aligned}I_{\text{enc}} = \int_A \mathbf{j} \cdot d\mathbf{a},\end{aligned} \hspace{\stretch{1}}(1.3)

so by Stokes Theorem we write

\begin{aligned}\int_{\partial A} \mathbf{B} \cdot d\mathbf{l} = \frac{4 \pi}{c} I_{\text{enc}}\end{aligned} \hspace{\stretch{1}}(1.4)

Now consider separately loops that do and do not enclose the solenoid windings. For a loop that encloses no current (one lying entirely outside the solenoid, or entirely within its interior) we have

\begin{aligned}\int_{\partial A} \mathbf{B} \cdot d \mathbf{l} = \frac{4 \pi I_{\text{enc}} }{c} = 0,\end{aligned} \hspace{\stretch{1}}(1.5)

For a rectangular loop of length L straddling the windings we enclose the equivalent of n L loops, each with current I, and only the interior leg contributes, so we have

\begin{aligned}\int \mathbf{B} \cdot d\mathbf{l} = \frac{4 \pi n I L}{c} = B L.\end{aligned} \hspace{\stretch{1}}(1.6)

Our magnetic field (interior to the solenoid, and zero outside) is constant while I is constant, and in vector form this is

\begin{aligned}\mathbf{B} = \frac{4 \pi n I}{c} \hat{\mathbf{z}}\end{aligned} \hspace{\stretch{1}}(1.7)

Initial Electric field.

How about the electric fields?

For r < a and r > b we have \mathbf{E} = 0 since there is no charge enclosed by any Gaussian surface that we choose.

Between a and b we have, for a Gaussian surface of height l (assuming that l \gg a)

\begin{aligned}E (2 \pi r) l = 4 \pi (+Q),\end{aligned} \hspace{\stretch{1}}(1.8)

so we have

\begin{aligned}\mathbf{E} = \frac{2 Q }{r l} \hat{\mathbf{r}}.\end{aligned} \hspace{\stretch{1}}(1.9)

Poynting vector before the current changes.

Our Poynting vector, the energy flux per unit time, is

\begin{aligned}\mathbf{S} = \frac{c}{4 \pi} (\mathbf{E} \times \mathbf{B})\end{aligned} \hspace{\stretch{1}}(1.10)

This is non-zero only in the region between the inner cylinder (radius a) and the solenoid (radius R), since that’s the only place where both \mathbf{E} and \mathbf{B} are non-zero. That is

\begin{aligned}\mathbf{S} &= \frac{c}{4 \pi} (\mathbf{E} \times \mathbf{B}) \\ &=\frac{c}{4 \pi} \frac{2 Q }{r l} \frac{4 \pi n I}{c} \hat{\mathbf{r}} \times \hat{\mathbf{z}} \\ &= -\frac{2 Q n I}{r l} \hat{\boldsymbol{\phi}}\end{aligned}

(since \hat{\mathbf{r}} \times \hat{\boldsymbol{\phi}} = \hat{\mathbf{z}}, cyclic permutation gives \hat{\mathbf{z}} \times \hat{\mathbf{r}} = \hat{\boldsymbol{\phi}}, and therefore \hat{\mathbf{r}} \times \hat{\mathbf{z}} = -\hat{\boldsymbol{\phi}})
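
As a cross check of the sign and magnitude (my own addition, not from the tutorial), here is the same \mathbf{S} = (c/4\pi) \mathbf{E} \times \mathbf{B} computation done in sympy, with the cylindrical unit vectors written out in Cartesian components at an arbitrary azimuthal angle \phi.

import sympy as sp

Q, n, I, r, l, c, phi = sp.symbols('Q n I r l c phi', positive=True)

# cylindrical unit vectors at azimuthal angle phi
rhat   = sp.Matrix([sp.cos(phi), sp.sin(phi), 0])
phihat = sp.Matrix([-sp.sin(phi), sp.cos(phi), 0])
zhat   = sp.Matrix([0, 0, 1])

E = (2*Q/(r*l)) * rhat          # field between the shells
B = (4*sp.pi*n*I/c) * zhat      # field inside the solenoid

S = (c/(4*sp.pi)) * E.cross(B)

# difference from -(2 Q n I / (r l)) phihat should be the zero vector
print((S + (2*Q*n*I/(r*l)) * phihat).applyfunc(sp.simplify))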

A motivational aside: Momentum density.

Suppose {\left\lvert{\mathbf{E}}\right\rvert} = {\left\lvert{\mathbf{B}}\right\rvert}, then our Poynting vector is

\begin{aligned}\mathbf{S} = \frac{c}{4 \pi} \mathbf{E} \times \mathbf{B} = \frac{ c \hat{\mathbf{k}}}{4 \pi} \mathbf{E}^2,\end{aligned} \hspace{\stretch{1}}(1.11)

but

\begin{aligned}\mathcal{E} = \text{energy density} = \frac{\mathbf{E}^2 + \mathbf{B}^2}{8 \pi} = \frac{\mathbf{E}^2}{4 \pi},\end{aligned} \hspace{\stretch{1}}(1.12)

so

\begin{aligned}\mathbf{S} = c \hat{\mathbf{k}} \mathcal{E} = \mathbf{v} \mathcal{E}.\end{aligned} \hspace{\stretch{1}}(1.13)

Now recall the relation between (relativistic) mechanical momentum \mathbf{p} = \gamma m \mathbf{v} and energy \mathcal{E} = \gamma m c^2

\begin{aligned}\mathbf{p} = \frac{\mathbf{v}}{c^2} \mathcal{E}.\end{aligned} \hspace{\stretch{1}}(1.14)

This justifies calling the quantity

\begin{aligned}\mathbf{P}_{\text{EM}} = \frac{\mathbf{S}}{c^2},\end{aligned} \hspace{\stretch{1}}(1.15)

the momentum density.

Momentum density of the EM fields.

So we label our scaled Poynting vector the momentum density for the field

\begin{aligned}\mathbf{P}_{\text{EM}} = -\frac{2 Q n I}{c^2 r l} \hat{\boldsymbol{\phi}},\end{aligned} \hspace{\stretch{1}}(1.16)

and can now compute an angular momentum density in the field between the solenoid and the outer cylinder prior to changing the currents

\begin{aligned}\mathbf{L}_{\text{EM}}&= \mathbf{r} \times \mathbf{P}_{\text{EM}} \\ &= r \hat{\mathbf{r}} \times \mathbf{P}_{\text{EM}} \\ \end{aligned}

This gives us

\begin{aligned}\mathbf{L}_{\text{EM}} = -\frac{2 Q n I}{c^2 l} \hat{\mathbf{z}} = \text{constant}.\end{aligned} \hspace{\stretch{1}}(1.17)

Note that this is the angular momentum density in the region between the solenoid and the inner cylinder, between z = 0 and z = l. Outside of this region, the angular momentum density is zero.

After the current is changed

Induced electric field

When we turn off (or change) I, some of the magnetic field \mathbf{B} will be converted into electric field \mathbf{E} according to Faraday’s law

\begin{aligned}\boldsymbol{\nabla} \times \mathbf{E} = - \frac{1}{{c}} \frac{\partial {\mathbf{B}}}{\partial {t}}.\end{aligned} \hspace{\stretch{1}}(1.18)

In integral form, utilizing an open surface, this is

\begin{aligned}\int_A (\boldsymbol{\nabla} \times \mathbf{E}) \cdot \hat{\mathbf{n}} dA&=\int_{\partial A} \mathbf{E} \cdot d\mathbf{l} \\ &= - \frac{1}{{c}} \int_A \frac{\partial {\mathbf{B}}}{\partial {t}} \cdot d\mathbf{A} \\ &= - \frac{1}{{c}} \frac{\partial {\Phi_B(t)}}{\partial {t}},\end{aligned}

where we introduce the magnetic flux

\begin{aligned}\Phi_B(t) = \int_A \mathbf{B} \cdot d\mathbf{A}.\end{aligned} \hspace{\stretch{1}}(1.19)

We utilize a circular surface of radius r, cutting directly across the solenoid perpendicular to \hat{\mathbf{z}}. Recall that we have the magnetic field 1.7 only inside the solenoid. So for r < R this flux is

\begin{aligned}\Phi_B(t)&= \int_A \mathbf{B} \cdot d\mathbf{A} \\ &= (\pi r^2) \frac{4 \pi n I(t)}{c}.\end{aligned}

For r > R only the portion of the surface with radius r \le R contributes to the flux

\begin{aligned}\Phi_B(t)&= \int_A \mathbf{B} \cdot d\mathbf{A} \\ &= (\pi R^2) \frac{4 \pi n I(t)}{c}.\end{aligned}

We can now compute the circulation of the electric field

\begin{aligned}\int_{\partial A} \mathbf{E} \cdot d\mathbf{l} = - \frac{1}{{c}} \frac{\partial {\Phi_B(t)}}{\partial {t}},\end{aligned} \hspace{\stretch{1}}(1.20)

by taking the derivatives of the magnetic flux. For r > R this is

\begin{aligned}\int_{\partial A} \mathbf{E} \cdot d\mathbf{l}&= (2 \pi r) E \\ &=-(\pi R^2) \frac{4 \pi n \dot{I}(t)}{c^2}.\end{aligned}

This gives us the magnitude of the induced electric field

\begin{aligned}E&= -(\pi R^2) \frac{4 \pi n \dot{I}(t)}{2 \pi r c^2} \\ &= -\frac{2 \pi R^2 n \dot{I}(t)}{r c^2}.\end{aligned}

Similarly for r < R we have

\begin{aligned}E = -\frac{2 \pi r n \dot{I}(t)}{c^2}\end{aligned} \hspace{\stretch{1}}(1.21)

Summarizing we have

\begin{aligned}\mathbf{E} =\left\{\begin{array}{l l}-\frac{2 \pi r n \dot{I}(t)}{c^2} \hat{\boldsymbol{\phi}} & \mbox{for } r < R \\ -\frac{2 \pi R^2 n \dot{I}(t)}{r c^2} \hat{\boldsymbol{\phi}} & \mbox{for } r > R\end{array}\right.\end{aligned} \hspace{\stretch{1}}(1.22)
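
The same two magnitudes follow from E(r) = -(1/c) \dot{\Phi}_B(t) / (2 \pi r), which a few lines of sympy confirm (again a check of my own, not part of the tutorial).

import sympy as sp

r, R, n, c, t = sp.symbols('r R n c t', positive=True)
I = sp.Function('I')(t)

# magnetic flux through a circle of radius r, for r < R and r > R
Phi_inside  = sp.pi * r**2 * 4*sp.pi*n*I/c
Phi_outside = sp.pi * R**2 * 4*sp.pi*n*I/c

E_inside  = -(1/c) * sp.diff(Phi_inside, t) / (2*sp.pi*r)
E_outside = -(1/c) * sp.diff(Phi_outside, t) / (2*sp.pi*r)

print(sp.simplify(E_inside  - (-2*sp.pi*r*n*sp.diff(I, t)/c**2)))         # prints 0
print(sp.simplify(E_outside - (-2*sp.pi*R**2*n*sp.diff(I, t)/(r*c**2))))  # prints 0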

Torque and angular momentum induced by the fields.

Our torque \mathbf{N} = \mathbf{r} \times \mathbf{F} = d\mathbf{L}/dt on the outer cylinder (radius b) that is induced by changing the current is

\begin{aligned}\mathbf{N}_b&= (b \hat{\mathbf{r}}) \times (-Q \mathbf{E}_{r = b}) \\ &= b Q \frac{2 \pi R^2 n \dot{I}(t)}{b c^2} \hat{\mathbf{r}} \times \hat{\boldsymbol{\phi}} \\ &= \frac{1}{{c^2}} 2 \pi R^2 n Q \dot{I} \hat{\mathbf{z}}.\end{aligned}

This provides the induced angular momentum on the outer cylinder

\begin{aligned}\mathbf{L}_b&= \int dt \mathbf{N}_b = \frac{ 2 \pi n R^2 Q}{c^2} \hat{\mathbf{z}} \int_I^0 \frac{dI}{dt} dt \\ &= -\frac{2 \pi n R^2 Q I}{c^2} \hat{\mathbf{z}}.\end{aligned}

This is the angular momentum of b induced by changing the current or changing the magnetic field.

On the inner cylinder we have

\begin{aligned}\mathbf{N}_a&= (a \hat{\mathbf{r}} ) \times (Q \mathbf{E}_{r = a}) \\ &= a Q \left(- \frac{2 \pi}{c^2} n a \dot{I} \right) \hat{\mathbf{r}} \times \hat{\boldsymbol{\phi}} \\ &= -\frac{2 \pi n a^2 Q \dot{I}}{c^2} \hat{\mathbf{z}}.\end{aligned}

So our induced angular momentum on the inner cylinder is

\begin{aligned}\mathbf{L}_a = \frac{2 \pi n a^2 Q I}{c^2} \hat{\mathbf{z}}.\end{aligned} \hspace{\stretch{1}}(1.23)

The total angular momentum in the system has to be conserved, and we must have

\begin{aligned}\mathbf{L}_a + \mathbf{L}_b = -\frac{2 n I Q}{c^2} \pi (R^2 - a^2) \hat{\mathbf{z}}.\end{aligned} \hspace{\stretch{1}}(1.24)

At the end of the tutorial, this sum was equated with the field angular momentum density \mathbf{L}_{\text{EM}}, but this has different dimensions. In fact, observe that the volume in which this angular momentum density is non-zero is the difference between the volume of the solenoid and the inner cylinder

\begin{aligned}V = \pi R^2 l - \pi a^2 l,\end{aligned} \hspace{\stretch{1}}(1.25)

so if we are to integrate the angular momentum density 1.17 over this region we have

\begin{aligned}\int \mathbf{L}_{\text{EM}} dV = -\frac{2 Q n I}{c^2} \pi (R^2 - a^2) \hat{\mathbf{z}}\end{aligned} \hspace{\stretch{1}}(1.26)

which does match the sum of the mechanical angular momenta 1.24, as expected.
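
As one last sanity check of my own (not part of the tutorial), sympy confirms that the integrated field angular momentum equals \mathbf{L}_a + \mathbf{L}_b.

import sympy as sp

Q, n, I, c, l, a, R = sp.symbols('Q n I c l a R', positive=True)

L_EM_density = -2*Q*n*I/(c**2 * l)       # z component of 1.17, constant for a < r < R
V = sp.pi * R**2 * l - sp.pi * a**2 * l  # volume of that annular region

L_field = L_EM_density * V

L_b = -2*sp.pi*n*R**2*Q*I/c**2           # z component of the outer cylinder's angular momentum
L_a =  2*sp.pi*n*a**2*Q*I/c**2           # z component of the inner cylinder's angular momentum

print(sp.simplify(L_field - (L_a + L_b)))  # prints 0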

References

[1] D. Fleisch. A Student’s Guide to Maxwell’s Equations. Cambridge University Press, 2007. “http://www4.wittenberg.edu/maxwell/index.html“.

[2] Wikipedia. Faraday’s law of induction — wikipedia, the free encyclopedia [online]. 2011. [Online; accessed 10-March-2011]. http://en.wikipedia.org/w/index.php?title=Faraday\%27s_law_of_induction&oldid=416715237.


PHY450H1S. Relativistic Electrodynamics Lecture 17 (Taught by Prof. Erich Poppitz). Energy and momentum density. Starting a Green’s function solution to Maxwell’s equation.

Posted by peeterjoot on March 8, 2011

[Click here for a PDF of this post with nicer formatting]

Reading.

Covering chapter 6 material \S 31, and starting chapter 8 material from the text [1].

Covering lecture notes pp. 128-135: energy flux and momentum density of the EM wave (128-129); radiation pressure, its discovery and significance in physics (130-131); EM fields of moving charges: setting up the wave equation with a source (132-133); the convenience of Lorentz gauge in the study of radiation (134); reminder on Green’s functions from electrostatics (135) [Tuesday, Mar. 8]

Review. Energy density and Poynting vector.

Last time we showed that Maxwell’s equations imply

\begin{aligned}\frac{\partial }{\partial t} \frac{\mathbf{E}^2 + \mathbf{B}^2 }{8 \pi} = -\mathbf{j} \cdot \mathbf{E} - \boldsymbol{\nabla} \cdot \mathbf{S}\end{aligned} \hspace{\stretch{1}}(2.1)

In the lecture, Professor Poppitz said he was free here to use a full time derivative. When asked why, it was because he was considering \mathbf{E} and \mathbf{B} here to be functions of time only, since they were measured at a fixed point in space. This is really the same thing as using a time partial, so in these notes I’ll just be explicit and stick to using partials.

\begin{aligned}\mathbf{S} = \frac{c}{4 \pi} \mathbf{E} \times \mathbf{B}\end{aligned} \hspace{\stretch{1}}(2.2)

\begin{aligned}\frac{\partial }{\partial {t}} \int_V d^3 \mathbf{x} \frac{\mathbf{E}^2 + \mathbf{B}^2 }{8 \pi} = - \int_V d^3 \mathbf{x} \, \mathbf{j} \cdot \mathbf{E} - \int_{\partial V} d^2 \boldsymbol{\sigma} \cdot \mathbf{S}\end{aligned} \hspace{\stretch{1}}(2.3)

Any change in the energy must be due either to currents, or to energy escaping through the surface.

\begin{aligned}\mathcal{E} = \frac{\mathbf{E}^2 + \mathbf{B}^2 }{8 \pi} &= \mbox{Energy density of the EM field} \\ \mathbf{S} = \frac{c}{4 \pi} \mathbf{E} \times \mathbf{B} &= \mbox{Energy flux of the EM fields}\end{aligned} \hspace{\stretch{1}}(2.4)

The energy flux of the EM field: this is the energy flowing through d^2 \mathbf{A} in unit time (\mathbf{S} \cdot d^2 \mathbf{A}).

How about electromagnetic waves?

In a plane wave moving in direction \mathbf{k}.

PICTURE: \mathbf{E} \parallel \hat{\mathbf{z}}, \mathbf{B} \parallel \hat{\mathbf{x}}, \mathbf{k} \parallel \hat{\mathbf{y}}.

So, \mathbf{S} \parallel \mathbf{k} since \mathbf{E} \times \mathbf{B} \propto \mathbf{k}.

{\left\lvert{\mathbf{S}}\right\rvert} for a plane wave is the amount of energy through unit area perpendicular to \mathbf{k} in unit time.

Recall that we calculated

\begin{aligned}\mathbf{B} &= (\mathbf{k} \times \boldsymbol{\beta}) \sin(\omega t - \mathbf{k} \cdot \mathbf{x}) \\ \mathbf{E} &= \boldsymbol{\beta} {\left\lvert{\mathbf{k}}\right\rvert} \sin(\omega t - \mathbf{k} \cdot \mathbf{x})\end{aligned} \hspace{\stretch{1}}(3.6)

Since we had \mathbf{k} \cdot \boldsymbol{\beta} = 0, we have {\left\lvert{\mathbf{E}}\right\rvert} = {\left\lvert{\mathbf{B}}\right\rvert}, and our Poynting vector follows nicely

\begin{aligned}\mathbf{S} &= \frac{\mathbf{k}}{{\left\lvert{\mathbf{k}}\right\rvert}} \frac{c}{4 \pi} \mathbf{E}^2  \\ &= \frac{\mathbf{k}}{{\left\lvert{\mathbf{k}}\right\rvert}} c \frac{\mathbf{E}^2 + \mathbf{B}^2}{8 \pi} \\ &= \frac{\mathbf{k}}{{\left\lvert{\mathbf{k}}\right\rvert}} c \mathcal{E}\end{aligned}
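
As a numerical aside of my own (not from the lecture), it is easy to confirm with numpy that for these plane wave fields we have {\left\lvert{\mathbf{E}}\right\rvert} = {\left\lvert{\mathbf{B}}\right\rvert} and that \mathbf{E} \times \mathbf{B} lies along \mathbf{k}, for a random propagation direction and polarization.

import numpy as np

rng = np.random.default_rng(0)
k = rng.normal(size=3)

# pick a polarization beta perpendicular to k
b = rng.normal(size=3)
beta = b - (b @ k) / (k @ k) * k

s = 0.7   # arbitrary stand-in value for sin(omega t - k . x)
B = np.cross(k, beta) * s
E = beta * np.linalg.norm(k) * s

print(np.isclose(np.linalg.norm(E), np.linalg.norm(B)))   # True: |E| = |B|

ExB = np.cross(E, B)
print(np.allclose(ExB, (ExB @ k) / (k @ k) * k))          # True: E x B is parallel to k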

\begin{aligned}[\mathbf{S}] = \frac{\text{energy}}{\text{time} \times \text{area}} = \frac{\text{momentum} \times \text{speed}}{\text{time} \times \text{area}}\end{aligned} \hspace{\stretch{1}}(3.8)

\begin{aligned}\left[\frac{\mathbf{S}}{c^2} \right] &= \frac{\text{momentum}}{\text{time} \times \text{area} \times \text{speed}} \\ &= \frac{\text{momentum}}{\text{area} \times \text{distance}} \\ &= \frac{\text{momentum}}{\text{volume}}\end{aligned}

So we see that \mathbf{S}/c^2 is indeed rightly called “the momentum density” of the EM field.

We will later find that \mathcal{E} and \mathbf{S} are components of a rank-2 four tensor

\begin{aligned}T^{ij} = \begin{bmatrix}\mathcal{E} & \frac{S^1}{c^2} & \frac{S^2}{c^2} & \frac{S^3}{c^2} \\ \frac{S^1}{c^2} & & & \\ \frac{S^2}{c^2} & & \begin{bmatrix}\sigma^{\alpha\beta} \end{bmatrix}& \\ \frac{S^3}{c^2} & & & \end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.9)

where \sigma^{\alpha\beta} is the stress tensor. We will get to all this in more detail later.

For EM wave we have

\begin{aligned}\mathbf{S} = \hat{\mathbf{k}} c \mathcal{E}\end{aligned} \hspace{\stretch{1}}(3.10)

(this is the energy flux)

\begin{aligned}\frac{\mathbf{S}}{c^2} = \hat{\mathbf{k}} \frac{\mathcal{E}}{c}\end{aligned} \hspace{\stretch{1}}(3.11)

(the momentum density of the wave).

\begin{aligned}c {\left\lvert{\frac{\mathbf{S}}{c^2}}\right\rvert} = \mathcal{E}\end{aligned} \hspace{\stretch{1}}(3.12)

(recall \mathcal{E} = c {\left\lvert{\mathbf{p}}\right\rvert} for massless particles).

EM waves carry energy and momentum so when absorbed or reflected these are transferred to bodies.

Kepler speculated that this was the case, based on his observation that comet tails always point away from the sun, as if pushed by the sunlight.

Maxwell also suggested that light would exert a force (presumably he wrote down the “Maxwell stress tensor” T^{ij} that is named after him).

This was actually measured later in 1901, by Peter Lebedev (Russia).

PICTURE: pole with flags in vacuum jar. Black (absorber) on one side, and Silver (reflector) on the other. Between the two of these, momentum conservation will introduce rotation (in the direction of the silver).

This is actually a tricky experiment and requires the vacuum, since the black surface warms up, and heats up the nearby gas molecules, which causes a rotation in the opposite direction due to just these thermal effects.

Radiation pressure of light is also one of the factors that prevents a star from collapsing under its own gravitation.

Moving on. Solving Maxwell’s equation

Our equations are

\begin{aligned}\epsilon^{i j k l} \partial_j F_{k l} &= 0 \\ \partial_i F^{i k} &= \frac{4 \pi}{c} j^k,\end{aligned} \hspace{\stretch{1}}(4.13)

where we assume that j^k(\mathbf{x}, t) is a given. Our task is to find F^{i k}, the (\mathbf{E}, \mathbf{B}) fields.

Proceed by finding A^i. As usual, with F_{i j} = \partial_i A_j - \partial_j A_i, the Bianchi identity is automatically satisfied, so we can focus on the current equation.

In terms of potentials

\begin{aligned}\partial_i (\partial^i A^k - \partial^k A^i) = \frac{ 4 \pi}{c} j^k\end{aligned} \hspace{\stretch{1}}(4.15)

or

\begin{aligned}\partial_i \partial^i A^k - \partial^k (\partial_i A^i) = \frac{ 4 \pi}{c} j^k\end{aligned} \hspace{\stretch{1}}(4.16)

We want to work in the Lorentz gauge \partial_i A^i = 0. This is justified by the simplicity of the remaining problem

\begin{aligned}\partial_i \partial^i A^k = \frac{4 \pi}{c} j^k\end{aligned} \hspace{\stretch{1}}(4.17)

Write

\begin{aligned}\partial_i \partial^i = \frac{1}{c^2} \frac{\partial^2 }{\partial t^2} - \Delta = \square\end{aligned} \hspace{\stretch{1}}(4.18)

where

\begin{aligned}\Delta = \frac{\partial^2 }{\partial x^2} + \frac{\partial^2 }{\partial y^2} + \frac{\partial^2 }{\partial z^2}\end{aligned} \hspace{\stretch{1}}(4.19)

This \square is the d’Alembert operator (“d’Alembertian”).
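
As an aside of my own (not from the lecture), a couple of lines of sympy confirm that the homogeneous equation \square \psi = 0 is solved by scalar plane waves with \omega = c {\left\lvert{\mathbf{k}}\right\rvert}.

import sympy as sp

t, x, y, z = sp.symbols('t x y z', real=True)
c = sp.symbols('c', positive=True)
w, k1, k2, k3 = sp.symbols('omega k1 k2 k3', real=True)

psi = sp.exp(sp.I*(k1*x + k2*y + k3*z - w*t))

# d'Alembertian (1/c^2) d^2/dt^2 - Laplacian, applied to the plane wave
box_psi = sp.diff(psi, t, 2)/c**2 - (sp.diff(psi, x, 2) + sp.diff(psi, y, 2) + sp.diff(psi, z, 2))

print(sp.simplify(box_psi/psi))  # k1**2 + k2**2 + k3**2 - omega**2/c**2
print(sp.simplify(box_psi.subs(w, c*sp.sqrt(k1**2 + k2**2 + k3**2))))  # 0 on the light cone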

Our equation is

\begin{aligned}\square A^k = \frac{4 \pi}{c} j^k\end{aligned} \hspace{\stretch{1}}(4.20)

(in the Lorentz gauge)

If we learn how to solve 4.20, then we’ve learned all we need.

Method: Green’s functions

In electrostatics, where \mathbf{j} = 0 and only A^0 \ne 0, we have

\begin{aligned}\Delta A^0 = -4 \pi \rho\end{aligned} \hspace{\stretch{1}}(4.21)

Solution

\begin{aligned}\Delta_{\mathbf{x}} G(\mathbf{x} - \mathbf{x}') = \delta^3( \mathbf{x} - \mathbf{x}')\end{aligned} \hspace{\stretch{1}}(4.22)

PICTURE:

\begin{aligned}\rho(\mathbf{x}') d^3 \mathbf{x}'\end{aligned} \hspace{\stretch{1}}(4.23)

(a small box)

acting through distance {\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}, acting at point \mathbf{x}. With G(\mathbf{x}, \mathbf{x}') = -\frac{1}{4 \pi {\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}}, we have

\begin{aligned}\int d^3 \mathbf{x}' \Delta_{\mathbf{x}} G(\mathbf{x} - \mathbf{x}') \rho(\mathbf{x}')&= \int d^3 \mathbf{x}' \delta^3( \mathbf{x} - \mathbf{x}') \rho(\mathbf{x}') \\ &= \rho(\mathbf{x})\end{aligned}

Since \Delta_{\mathbf{x}} acts only on the unprimed coordinates, it can be pulled outside of the integral over \mathbf{x}'. So, taking

\begin{aligned}\phi(\mathbf{x}) = - 4 \pi \int d^3 \mathbf{x}' G(\mathbf{x} - \mathbf{x}') \rho(\mathbf{x}'),\end{aligned}

we have

\begin{aligned}\Delta_{\mathbf{x}} \phi(\mathbf{x})&=-4 \pi \int d^3 \mathbf{x}' \Delta_{\mathbf{x}} G(\mathbf{x} - \mathbf{x}') \rho(\mathbf{x}') \\ &=-4 \pi \rho(\mathbf{x}),\end{aligned}

as required by \Delta \phi = -4 \pi \rho.

We end up finding that

\begin{aligned}\phi(\mathbf{x}) = \int \frac{\rho(\mathbf{x}')}{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}} d^3 \mathbf{x}',\end{aligned} \hspace{\stretch{1}}(4.24)

thus solving the problem. We wish next to do this for the Maxwell equation 4.20.
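
As a check of my own (not from the lecture), sympy confirms that this Green’s function is harmonic away from the source point; the delta function at \mathbf{x} = \mathbf{x}' itself requires a more careful distributional treatment that a naive symbolic derivative does not capture.

import sympy as sp

x, y, z, xp, yp, zp = sp.symbols('x y z xp yp zp', real=True)

G = -1 / (4*sp.pi*sp.sqrt((x - xp)**2 + (y - yp)**2 + (z - zp)**2))

lap_G = sp.diff(G, x, 2) + sp.diff(G, y, 2) + sp.diff(G, z, 2)
print(sp.simplify(lap_G))  # prints 0, valid only for x != x'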

The Green’s function method is effective, but I can’t help but consider it somewhat of a cheat, since one somehow has to already know what the Green’s function is. In the electrostatics case, at least, we can work from the potential function and take its Laplacian to find that this is equivalent (thus implicitly solving for the Green’s function at the same time). It will be interesting to see how we do this for the forced d’Alembertian equation.

References

[1] L.D. Landau and E.M. Lifshitz. The classical theory of fields. Butterworth-Heinemann, 1980.


Collection of PHY450HS1 (Relativistic Electrodynamics) notes so far.

Posted by peeterjoot on March 4, 2011

I’ve collected all my class notes and problems so far, covering material preceding the midterm, into a single pdf for convenience.

This includes all of the following individual bits previously posted.

Mar 2, 2011 PHY450H1S. Relativistic Electrodynamics Lecture 16 (Taught by Prof. Erich Poppitz). Monochromatic EM fields. Poynting vector and energy density conservation.
PHY450H1S. Relativistic Electrodynamics Lecture 16 (Taught by Prof. Erich Poppitz). Monochromatic EM fields. Poynting vector and energy density conservation.

Mar 1, 2011 PHY450H1S. Relativistic Electrodynamics Lecture 15 (Taught by Prof. Erich Poppitz). Fourier solution of Maxwell’s vacuum wave equation in the Coulomb gauge.
Fourier solution of Maxwell’s vacuum wave equation in the Coulomb gauge.

Feb 16, 2011 PHY450H1S. Relativistic Electrodynamics Lecture 14 (Taught by Simon Freedman). Wave equation in Coulomb and Lorentz gauges.
Wave equation in Coulomb and Lorentz gauges.

Feb 15, 2011 PHY450H1S. Relativistic Electrodynamics Lecture 13 (Taught by Prof. Erich Poppitz). Variational principle for the field.
PHY450H1S. Relativistic Electrodynamics Lecture 13 (Taught by Prof. Erich Poppitz). Variational principle for the field.

Feb 15, 2011 PHY450H1S Problem Set 3.
PHY450H1S Problem Set 3.

Feb 10, 2011 PHY450H1S. Relativistic Electrodynamics Lecture 12 (Taught by Prof. Erich Poppitz). Action for the field.
Action for the field.

Feb 9, 2011 PHY450H1S. Relativistic Electrodynamics Lecture 11 (Taught by Prof. Erich Poppitz). Unpacking Lorentz force equation. Lorentz transformations of the strength tensor, Lorentz field invariants, Bianchi identity, and first half of Maxwell’s.
Unpacking Lorentz force equation. Lorentz transformations of the strength tensor, Lorentz field invariants, Bianchi identity, and first half of Maxwell’s.

Feb 8, 2011 PHY450H1S. Relativistic Electrodynamics Lecture 10 (Taught by Prof. Erich Poppitz). Lorentz force equation energy term, and four vector formulation of the Lorentz force equation.
Lorentz force equation energy term, and four vector formulation of the Lorentz force equation.

Feb 6, 2011 Energy term of the Lorentz force equation.
Energy term of the Lorentz force equation.

Feb 3, 2011 PHY450H1S. Relativistic Electrodynamics Tutorial 3 (TA: Simon Freedman). Relativistic motion in constant uniform electric or magnetic fields.
Relativistic motion in constant uniform electric or magnetic fields.

Feb 1, 2011 PHY450H1S Problem Set 2.
PHY450H1S Problem Set 2.

Feb 1, 2011 PHY450H1S. Relativistic Electrodynamics Lecture 9 (Taught by Prof. Erich Poppitz). Dynamics in a vector field.
Dynamics in a vector field.

Feb 1, 2011 PHY450H1S. Relativistic Electrodynamics Lecture 8 (Taught by Prof. Erich Poppitz). Relativistic dynamics.
Relativistic dynamics.

Jan 27, 2011 PHY450H1S. Relativistic Electrodynamics Tutorial 2 (TA: Simon Freedman). Two worked problems.
Two worked problems.

Jan 26, 2011 PHY450H1S. Relativistic Electrodynamics Lecture 7 (Taught by Prof. Erich Poppitz). Action and relativistic dynamics.
Action and relativistic dynamics.

Jan 25, 2011 PHY450H1S. Relativistic Electrodynamics Lecture 6 (Taught by Prof. Erich Poppitz). Four vectors and tensors.
Four vectors and tensors.

Jan 22, 2011 PHY450H1S Problem Set 1.
Problem Set 1.

Jan 20, 2011 Four vectors and a worked problem on energy flux density.
Four vectors and a worked problem on energy flux density.

Jan 18, 2011 Proper time, length contraction, time dilation, causality.
Proper time, length contraction, time dilation, causality.

Jan 18, 2011 Spacetime geometry, Lorentz transformations, Minkowski diagrams.
Spacetime geometry, Lorentz transformations, Minkowski diagrams.

Jan 14, 2011 Some tensor and geometric algebra comparisons in a spacetime context.

Jan 14, 2011 Lorentz transformation of an antisymmetric tensor.

Jan 13, 2011 Spacetime, events, worldlines, spacetime intervals, and invariance.
Spacetime, events, worldlines, spacetime intervals, and invariance.

Jan 12, 2011 Spacetime, events, worldlines, proper time, invariance.
Spacetime, events, worldlines, proper time, invariance.

Jan 11, 2011 Speed of light and simultaneity.
Speed of light and simultaneity.
