Peeter Joot's (OLD) Blog.

Math, physics, perl, and programming obscurity.

Archive for May, 2010

Infinite square well wavefunction.

Posted by peeterjoot on May 31, 2010

[Click here for a PDF of this post with nicer formatting and figures if the post had any (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

Motivation.

Work problem 4.1 from [1], calculation of the eigensolution for an infinite square well with boundaries [-a/2, a/2]. It actually seems a bit tidier to generalize this slightly to boundaries [a,b], which also implicitly solves the original problem. This is surely a problem that is done in 700 other QM texts, but I liked the way I did it this time, so am writing it down.

Guts

Our equation to solve is i \hbar \Psi_t = -(\hbar^2/2m) \Psi_{xx}. Separation of variables \Psi = T \phi gives us

\begin{aligned}T &\propto e^{-i E t/\hbar } \\ \phi'' &= -\frac{2 m E }{\hbar^2} \phi\end{aligned} \hspace{\stretch{1}}(2.1)

With k^2 = 2 m E/\hbar^2, we have

\begin{aligned}\phi = A e^{i k x } + B e^{-i k x},\end{aligned} \hspace{\stretch{1}}(2.3)

and the usual \phi(a) = \phi(b) = 0 boundary conditions give us

\begin{aligned}0 = \begin{bmatrix}e^{i k a } & e^{-i k a} \\ e^{i k b } & e^{-i k b}\end{bmatrix}\begin{bmatrix}A \\ B\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(2.4)

We must have a zero determinant, which gives us the constraints on k immediately

\begin{aligned}0 &= e^{i k (a - b)} - e^{i k (b-a)} \\ &= 2 i \sin( k (a - b) ).\end{aligned}

So our constraint on k, in terms of an integer n, and the corresponding separation constant (energy) E, are

\begin{aligned}k &= \frac{n \pi}{b - a} \\ E &= \frac{\hbar^2 n^2 \pi^2 }{2 m (b-a)^2}.\end{aligned} \hspace{\stretch{1}}(2.5)

One of the constants A, B can be eliminated directly by picking either of the two rows of 2.4

\begin{aligned}&A e ^{i k a } + B e^{-i k a} = 0 \\ &\implies \\ &B = -A e ^{2 i k a } \end{aligned}

So we have

\begin{aligned}\phi = A \left( e^{i k x } - e^{ ik (2a - x) } \right).\end{aligned} \hspace{\stretch{1}}(2.7)

Or,

\begin{aligned}\phi = 2 A i e^{i k a} \sin( k (x-a )) \end{aligned} \hspace{\stretch{1}}(2.8)

Because probability densities, currents and the expectations of any operators will always have paired \phi and \phi^{*} factors, any constant phase factors like i e^{i k a} above can be dropped, or absorbed into the constant A, and we can write

\begin{aligned}\phi = 2 A \sin( k (x-a )) \end{aligned} \hspace{\stretch{1}}(2.9)

The only thing left is to fix A by integrating {\left\lvert{\phi}\right\rvert}^2, for which we have

\begin{aligned}1 &= \int_a^b \phi \phi^{*} dx \\ &= A^2 \int_a^b dx \left( e^{i k x } - e^{ ik (2a - x) } \right) \left( e^{-i k x } - e^{ -ik (2a - x) } \right) \\ &= A^2 \int_a^b dx \left( 2 - e^{ik(2a - 2x)} - e^{ik(-2a + 2x)} \right) \\ &= 2 A^2 \int_a^b dx \left( 1 - \cos (2 k (a - x)) \right)\end{aligned}

This last trig term vanishes over the integration region and we are left with

\begin{aligned}A = \frac{1}{{ \sqrt{2 (b-a)}}},\end{aligned} \hspace{\stretch{1}}(2.10)

which essentially completes the problem. A final substitution back into 2.9 allows for a final tidy up

\begin{aligned}\phi = \sqrt{\frac{2}{b-a}} \sin( k (x-a )).\end{aligned} \hspace{\stretch{1}}(2.11)
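
As a quick numeric sanity check of 2.11 (a sketch, assuming numpy is available; the interval [a,b] and the quantum number n below are arbitrary choices, not anything from the problem), we can verify the normalization, the boundary conditions, and that -\phi''/\phi reproduces k^2 = 2 m E/\hbar^2:

# Sketch: numeric check of the infinite square well eigenfunction (2.11).
import numpy as np

a, b, n = 1.0, 3.0, 2                  # arbitrary interval and mode number
k = n * np.pi / (b - a)
x = np.linspace(a, b, 20001)
phi = np.sqrt(2.0 / (b - a)) * np.sin(k * (x - a))

# Normalization: the probability density should integrate to one over [a, b].
print(np.trapz(phi**2, x))             # ~1.0

# Boundary conditions phi(a) = phi(b) = 0.
print(phi[0], phi[-1])                 # both ~0

# Away from the nodes, -phi''/phi should reproduce k^2 = 2 m E / hbar^2.
phi_xx = np.gradient(np.gradient(phi, x), x)
interior = np.abs(phi) > 0.5 * np.abs(phi).max()
print(np.allclose(-phi_xx[interior] / phi[interior], k**2, rtol=1e-3))  # True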

References

[1] R. Liboff. Introductory quantum mechanics. Cambridge: Addison-Wesley Press, Inc, 2003.


On commutation of exponentials

Posted by peeterjoot on May 30, 2010

[Click here for a PDF of this post with nicer formatting (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

Motivation.

Previously, while working a Liboff problem, I wondered what conditions are required for exponentials to commute. In those problems the exponential arguments were operators. Exponentials of bivectors, as in quaternion-like spatial rotations or Lorentz boosts, are also good examples of (sometimes) non-commutative exponentials. It appears likely that the key requirement is that the exponential arguments commute, but how does one show this? Here this is explored a bit.

Guts

If one could show that it was true that

\begin{aligned}e^{x} e^{y} = e^{x + y}.\end{aligned} \hspace{\stretch{1}}(2.1)

then it would also imply that

\begin{aligned}e^{x} e^{y} = e^{y} e^{x}.\end{aligned} \hspace{\stretch{1}}(2.2)

Let’s perform the schoolboy exercise to prove 2.1 and explore the restrictions for such a proof. We assume a power series definition of the exponential operator, and do not assume that the values x, y are numeric, just that they can be multiplied. Commutative multiplication will not be assumed.

By virtue of the power series exponential definition we have

\begin{aligned}e^{x} e^{y} = \sum_{k=0}^\infty \frac{1}{{k!}} x^k\sum_{m=0}^\infty \frac{1}{{m!}} y^m.\end{aligned} \hspace{\stretch{1}}(2.3)

To attempt to put this into e^{x + y} form we’ll need to change the order that we evaluate the double sum, and here a picture (the diagonal grid double summation figure below) is helpful.

[Figure: Diagonal grid double summation.]

For somebody who has seen this summation trick before the picture probably says it all. We want to iterate over all pairs (k, m), and could do so in \{(k, 0), (k, 1), \cdots (k, \infty), k \in [0, \infty] \} order as in our sum. This is all the pairs of points in the upper right hand side of the grid. We can also cover these grid coordinates in a different order. In particular, these can be iterated over the diagonals. The first diagonal having the point (0,0), the second with the points \{(0, 1), (1, 0)\}, the third with the points \{(0, 2), (1, 1), (2, 0)\}.

Observe that along each diagonal the sum of the coordinates is constant, and increases by one. Also observe that the number of points in each diagonal is this sum. These observations provide a natural way to index the new grid traversal. Labeling each of these diagonals with index j, and points on that subset with n=0,1,\cdots, j, we can express the original loop indexes k and m in terms of these new (coupled) loop indexes j and n as follows

\begin{aligned}k &= j - n \\ m &= n.\end{aligned} \hspace{\stretch{1}}(2.4)

Our sum becomes

\begin{aligned}e^{x} e^{y} = \sum_{j=0}^\infty \sum_{n=0}^j\frac{1}{{(j-n)!}} x^{j-n}\frac{1}{{n!}} y^n.\end{aligned} \hspace{\stretch{1}}(2.6)
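
Here is a small numeric check of this diagonal re-indexing (a sketch; the truncation order N and the scalar values x, y below are arbitrary choices), summing the same terms in the original rectangle order and in the diagonal order of 2.4:

# Sketch: the rectangle and diagonal traversals sum the same grid of terms.
from math import factorial

x, y, N = 0.7, -1.3, 30

# Rectangle order: iterate over all pairs (k, m) directly.
rect = sum(x**k / factorial(k) * y**m / factorial(m)
           for k in range(N) for m in range(N))

# Diagonal order: j indexes the diagonal, n the position along it,
# with k = j - n and m = n as in 2.4.
diag = sum(x**(j - n) / factorial(j - n) * y**n / factorial(n)
           for j in range(N) for n in range(j + 1))

print(rect, diag)   # agree to within the (tiny) truncation error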

With one small rearrangement, by introducing a j! in both the numerator and the denominator, the goal is almost reached.

\begin{aligned}e^{x} e^{y} = \sum_{j=0}^\infty \frac{1}{{j!}} \sum_{n=0}^j \frac{j!}{(j-n)! n!} x^{j-n} y^n= \sum_{j=0}^\infty \frac{1}{{j!}} \sum_{n=0}^j \binom{j}{n} x^{j-n} y^n.\end{aligned} \hspace{\stretch{1}}(2.7)

This shows where we have a requirement that x and y commute, because only in that case do we have a binomial expansion

\begin{aligned}(x + y)^j = \sum_{n=0}^j \binom{j}{n} x^{j-n} y^n,\end{aligned} \hspace{\stretch{1}}(2.8)

in the interior sum. This reduces the problem to a consideration of the implications that possible non-commutation has on the binomial expansion. Consider the simple special case of (x + y)^2. If x and y do not necessarily commute, then we have

\begin{aligned}(x + y)^2 = x^2 + x y + y x + y^2\end{aligned} \hspace{\stretch{1}}(2.9)

whereas the binomial expansion formula has no such allowance for non-commutative multiplication and just counts the number of times a product can occur in any ordering as in

\begin{aligned}(x + y)^2 = x^2 + 2 x y + y^2 = x^2 + 2 y x + y^2.\end{aligned} \hspace{\stretch{1}}(2.10)

One sees the built-in requirement for commutative multiplication here. Now this doesn’t prove that e^{x} e^{y} \ne e^{y} e^{x} unconditionally if x and y do not commute, but we do see that commutativity of the arguments is sufficient if we want equality of such commuted exponentials. In particular, the end result of the Liboff calculation where we had

\begin{aligned}e^{i \hat{f}} e^{-i \hat{f}},\end{aligned} \hspace{\stretch{1}}(2.11)

where we were assuming this to be unity even for the differential operators \hat{f} under consideration, is now completely answered (since we have (i \hat{f}) (-i \hat{f}) \psi = (-i \hat{f}) (i \hat{f}) \psi).
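
Both conclusions are easy to illustrate concretely with matrix exponentials as the non-commuting example (a sketch, assuming scipy is available; the nilpotent pair X, Y below is an arbitrary choice):

# Sketch: e^X e^Y differs from e^{X+Y} and from e^Y e^X when X Y != Y X.
import numpy as np
from scipy.linalg import expm

X = np.array([[0.0, 1.0], [0.0, 0.0]])
Y = np.array([[0.0, 0.0], [1.0, 0.0]])   # X @ Y != Y @ X

print(np.allclose(expm(X) @ expm(Y), expm(X + Y)))        # False
print(np.allclose(expm(X) @ expm(Y), expm(Y) @ expm(X)))  # False

# Any matrix commutes with itself, so the e^{i f} e^{-i f} case is safe.
print(np.allclose(expm(X) @ expm(-X), np.eye(2)))         # True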


Fourier transformation of the Pauli QED wave equation (Take I).

Posted by peeterjoot on May 29, 2010

[Click here for a PDF of this post with nicer formatting].

Motivation.

In [1], Feynman writes the Pauli wave equation for the non-relativistic treatment of a mass in an electrodynamic field described by scalar and vector potentials. That is

\begin{aligned}i \hbar \frac{\partial {\Psi}}{\partial {t}} = \frac{1}{{2m}} \left( \mathbf{p} - \frac{e}{c} \mathbf{A} \right)^2 \Psi + e \phi \Psi\end{aligned} \hspace{\stretch{1}}(1.1)

Is this amenable to Fourier transform solution like so many other PDEs? Let’s give it a try. It would also be interesting to attempt to apply such a computation to see if it is possible to calculate \left\langle{\mathbf{x}}\right\rangle, and the first two derivatives of this expectation value. I would guess that this would produce the Lorentz force equation.

Prep

Fourier Notation.

Our transform pair will be written

\begin{aligned} \Psi(\mathbf{x}, t) &= \frac{1}{{(\sqrt{2 \pi})^3}} \int \hat{\Psi}(\mathbf{k}, t) e^{i \mathbf{k} \cdot \mathbf{x}} d^3 \mathbf{k} \end{aligned} \hspace{\stretch{1}}(2.2)

\begin{aligned}\hat{\Psi}(\mathbf{k}, t) &= \frac{1}{{(\sqrt{2 \pi})^3}} \int \Psi(\mathbf{x}, t) e^{-i \mathbf{k} \cdot \mathbf{x}} d^3 \mathbf{x} \end{aligned} \hspace{\stretch{1}}(2.3)

Interpretation of the squared momentum operator.

Feynman actually wrote

\begin{aligned}i \hbar \frac{\partial {\Psi}}{\partial {t}} = \frac{1}{{2m}} \left[\sigma \cdot \left( \mathbf{p} - \frac{e}{c} \mathbf{A} \right)\right]\left[\sigma \cdot \left( \mathbf{p} - \frac{e}{c} \mathbf{A} \right)\right] \Psi + e \phi \Psi\end{aligned} \hspace{\stretch{1}}(2.4)

I’m not familiar with that \sigma \cdot notation, so I’ve written this as a plain old vector square. If \mathbf{p} were not an operator, then this would be a scalar, but as written it actually also includes a bivector term proportional to \boldsymbol{\nabla} \wedge \mathbf{A} = I \mathbf{B}. To see that, let’s expand this operator explicitly.

\begin{aligned}\left( \mathbf{p} - \frac{e}{c} \mathbf{A} \right) \left( \mathbf{p} - \frac{e}{c} \mathbf{A} \right) \Psi &=\left( \mathbf{p}^2 - \frac{e}{c} ( \mathbf{p} \mathbf{A} + \mathbf{A} \mathbf{p} ) + \frac{e^2}{c^2} \mathbf{A}^2 \right) \Psi \\ &=\left( - \hbar^2 \boldsymbol{\nabla}^2 + \frac{i e \hbar }{c} ( \boldsymbol{\nabla} \mathbf{A} + \mathbf{A} \boldsymbol{\nabla} ) + \frac{e^2}{c^2} \mathbf{A}^2 \right) \Psi \\ \end{aligned}

This anticommutator of the vector potential and the gradient is only a scalar if \mathbf{A} has zero divergence. More generally, expanding by chain rule, and using braces to indicate the scope of the differential operations, we have

\begin{aligned}( \boldsymbol{\nabla} \mathbf{A} + \mathbf{A} \boldsymbol{\nabla} ) \Psi&=(\boldsymbol{\nabla} \Psi) \mathbf{A} + \mathbf{A} (\boldsymbol{\nabla} \Psi) + (\boldsymbol{\nabla} \mathbf{A}) \Psi \\ &=2 \mathbf{A} \cdot (\boldsymbol{\nabla} \Psi) + (\boldsymbol{\nabla} \cdot \mathbf{A}) \Psi + I (\boldsymbol{\nabla} \times \mathbf{A}) \Psi \\ &=2 \mathbf{A} \cdot (\boldsymbol{\nabla} \Psi) + (\boldsymbol{\nabla} \cdot \mathbf{A}) \Psi + I \mathbf{B} \Psi, \\ \end{aligned}

where I = \mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_3 is the spatial unit trivector, and \mathbf{B} = \boldsymbol{\nabla} \times \mathbf{A}.

This is assuming \Psi should be treated as a complex valued scalar, and not a complex-like geometric object of any sort. Does this bivector term have physical meaning? Should it be discarded or retained? If we assume it should be discarded, then we really want to write the Pauli equation utilizing an explicit scalar selection, as in

\begin{aligned}i \hbar \frac{\partial {\Psi}}{\partial {t}} = \frac{1}{{2m}} \left\langle{{ \left( \mathbf{p} - \frac{e}{c} \mathbf{A} \right)^2 }}\right\rangle \Psi + e \phi \Psi.\end{aligned} \hspace{\stretch{1}}(2.5)

Assuming that to be the case, our squared momentum operator takes the form

\begin{aligned}\left\langle{{ \left( \mathbf{p} - \frac{e}{c} \mathbf{A} \right)^2 }}\right\rangle \Psi &=\left( - \hbar^2 \boldsymbol{\nabla}^2 + 2 \frac{i e \hbar }{c}\mathbf{A} \cdot \boldsymbol{\nabla} + \frac{i e \hbar }{c}(\boldsymbol{\nabla} \cdot \mathbf{A}) + \frac{e^2}{c^2} \mathbf{A}^2 \right) \Psi.\end{aligned} \hspace{\stretch{1}}(2.6)

The Pauli equation, written out explicitly in terms of the gradient is then

\begin{aligned}i \hbar \frac{\partial {\Psi}}{\partial {t}} &= \frac{1}{{2m}} \left( - \hbar^2 \boldsymbol{\nabla}^2 + 2 \frac{i e \hbar }{c}\mathbf{A} \cdot \boldsymbol{\nabla} + \frac{e^2}{c^2} \mathbf{A}^2 \right) \Psi+e \left( \frac{i \hbar }{ 2 m c} (\boldsymbol{\nabla} \cdot \mathbf{A}) + \phi \right) \Psi.\end{aligned} \hspace{\stretch{1}}(2.7)

Confirmation.

Instead of guessing what Feynman means when he writes Pauli’s equation, it would be better to just check what Pauli says. In [2] he uses the more straightforward notation

\begin{aligned}\frac{1}{{2m}} \sum_{k=1}^3 \left( p_k - \frac{e}{c}A_k \right)^2\end{aligned} \hspace{\stretch{1}}(2.8)

for the vector potential dependent part of the Hamiltonian operator. This is just the scalar part as was guessed.

Guts

Using the expansion 2.7 of the Pauli equation, and writing V = \phi + i \hbar (\boldsymbol{\nabla} \cdot \mathbf{A})/ (2 m c) for the effective complex potential we have

\begin{aligned}i \hbar \frac{\partial {\Psi}}{\partial {t}} = \frac{1}{{2m}} \left( - \hbar^2 \boldsymbol{\nabla}^2 + 2 i \hbar \frac{e}{c} \mathbf{A} \cdot \boldsymbol{\nabla} + \frac{e^2}{c^2} \mathbf{A}^2 \right) \Psi + e V \Psi.\end{aligned} \hspace{\stretch{1}}(3.9)

Let’s now apply each of these derivative operations to our assumed Fourier solution \Psi(\mathbf{x}, t) from 2.2. Starting with the Laplacian we have

\begin{aligned}\boldsymbol{\nabla}^2 \Psi(\mathbf{x}, t) &=\frac{1}{{(\sqrt{2 \pi})^3}} \int \hat{\Psi}(\mathbf{k}, t) (i\mathbf{k})^2 e^{i \mathbf{k} \cdot \mathbf{x}} d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(3.10)

For the \mathbf{A} \cdot \boldsymbol{\nabla} operator application we have

\begin{aligned}\mathbf{A} \cdot \boldsymbol{\nabla} \Psi(\mathbf{x}, t) &=\frac{1}{{(\sqrt{2 \pi})^3}} \int \hat{\Psi}(\mathbf{k}, t) (i \mathbf{A} \cdot \mathbf{k}) e^{i \mathbf{k} \cdot \mathbf{x}} d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(3.11)

Putting both together we have

\begin{aligned}0 &= \frac{1}{{(\sqrt{2 \pi})^3}} \int \left(-i \hbar \frac{\partial {\hat{\Psi}}}{\partial {t}} + \frac{1}{{2m}} \left( \hbar^2 \mathbf{k}^2 - 2 \hbar \frac{e}{c} \mathbf{A} \cdot \mathbf{k} + \frac{e^2}{c^2} \mathbf{A}^2 \right) \hat{\Psi} + e V \hat{\Psi} \right)e^{i \mathbf{k} \cdot \mathbf{x}} d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(3.12)

We can tidy this up slightly by completing the square, yielding

\begin{aligned}0 &= \frac{1}{{(\sqrt{2 \pi})^3}} \int \left(-i \hbar \frac{\partial {\hat{\Psi}}}{\partial {t}} + \left( \frac{1}{{2m}} \left( \hbar \mathbf{k} - \frac{e}{c} \mathbf{A}(\mathbf{x}, t) \right)^2 + e V(\mathbf{x}, t) \right) \hat{\Psi} \right)e^{i \mathbf{k} \cdot \mathbf{x}} d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(3.13)

If this is to be zero for all (\mathbf{x}, t), it seems clear that we need \hat{\Psi}(\mathbf{k}, t) to be the solution of the first order differential equation

\begin{aligned}\frac{\partial {\hat{\Psi}}}{\partial {t}}(\mathbf{k}, t) = \frac{1}{{i \hbar}} \left( \frac{1}{{2m}} \left( \hbar \mathbf{k} - \frac{e}{c} \mathbf{A}(\mathbf{x}, t) \right)^2 + e V(\mathbf{x}, t) \right) \hat{\Psi}(\mathbf{k}, t)\end{aligned} \hspace{\stretch{1}}(3.14)

Somewhere along the way this got a bit confused. Our Fourier transform function is somehow a function of not just wave number, but also position, since \hat{\Psi} = \hat{\Psi}(\mathbf{x}, \mathbf{k}, t) by virtue of being a solution to a differential equation involving \mathbf{A}(\mathbf{x},t) and V(\mathbf{x}, t). Can we pretend not to have noticed this and continue on anyway? Let’s try the further simplification of the system by imposing a constraint of time-independent potentials ({\partial {\mathbf{A}}}/{\partial {t}} = {\partial {V}}/{\partial {t}} = 0). That allows for direct integration of the wave function’s Fourier transform

\begin{aligned}\hat{\Psi}(\mathbf{k}, t) = \hat{\Psi}(\mathbf{k}, 0) \exp\left(\frac{1}{{i \hbar}} \left( \frac{1}{{2m}} \left( \hbar \mathbf{k} - \frac{e}{c} \mathbf{A} \right)^2 + e V \right) t\right).\end{aligned} \hspace{\stretch{1}}(3.15)

And inverse transforming this

\begin{aligned}\Psi(\mathbf{x}, t) &= \frac{1}{{(\sqrt{2 \pi})^3}} \int \hat{\Psi}(\mathbf{k}, 0) \exp\left(\frac{1}{{i \hbar}} \left( \frac{1}{{2m}} \left( \hbar \mathbf{k} - \frac{e}{c} \mathbf{A}(\mathbf{x}) \right)^2 + e V(\mathbf{x}) \right) t+ i \mathbf{k} \cdot \mathbf{x}\right)d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(3.16)

By inserting the inverse Fourier transform of \hat{\Psi}(\mathbf{k}, 0), we have the time evolution of the wave function as a convolution integral

\begin{aligned}\Psi(\mathbf{x}, t) &= \frac{1}{{(2 \pi)^3}} \int \Psi(\mathbf{x}', 0) \exp\left(\frac{1}{{i \hbar}} \left( \frac{1}{{2m}} \left( \hbar \mathbf{k} - \frac{e}{c} \mathbf{A}(\mathbf{x}) \right)^2 + e V(\mathbf{x}) \right) t+ i \mathbf{k} \cdot (\mathbf{x} - \mathbf{x}')\right)d^3 \mathbf{k} d^3 \mathbf{x}'.\end{aligned} \hspace{\stretch{1}}(3.17)

Splitting out the convolution kernel, this takes a slightly tidier form

\begin{aligned}\Psi(\mathbf{x}, t) &= \int \hat{U}(\mathbf{x}, \mathbf{x}', t) \Psi(\mathbf{x}', 0) d^3 \mathbf{x}' \\ \hat{U}(\mathbf{x}, \mathbf{x}', t) &= \frac{1}{{(2 \pi)^3}} \int\exp\left(\frac{1}{{i \hbar}} \left( \frac{1}{{2m}} \left( \hbar \mathbf{k} - \frac{e}{c} \mathbf{A}(\mathbf{x}) \right)^2 + e V(\mathbf{x}) \right) t+ i \mathbf{k} \cdot (\mathbf{x} - \mathbf{x}')\right)d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(3.18)
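
As a sanity check of 3.15, here is a one dimensional numeric sketch of this evolution for the only case where the derivation holds up, constant \mathbf{A} and V (assuming numpy; the parameter values, the initial packet, and the \hbar = m = e = c = 1 units are all arbitrary choices). The Fourier space factor is a pure phase, so the norm should be preserved:

# Sketch: spectral time evolution per 3.15 with constant potentials, in 1D.
import numpy as np

hbar = m = e = c = 1.0
A, V = 0.5, 0.25                       # constant potentials (arbitrary)
N, L = 1024, 40.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L/N)

psi0 = np.exp(-x**2) * np.exp(2j * x)  # arbitrary initial wave packet
psi0_hat = np.fft.fft(psi0)

t = 1.5
phase = ((hbar * k - e * A / c)**2 / (2 * m) + e * V) * t / (1j * hbar)
psi_t = np.fft.ifft(psi0_hat * np.exp(phase))

# Unitary evolution: the two norms should agree.
print(np.trapz(np.abs(psi0)**2, x), np.trapz(np.abs(psi_t)**2, x))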

Verification attempt.

If we apply the Pauli equation 1.1 to 3.18, does it produce the correct answer?

For the LHS we have

\begin{aligned}i \hbar \frac{\partial {\Psi}}{\partial {t}} &=\int \left( \frac{1}{{2m}} \left( \hbar \mathbf{k} - \frac{e}{c} \mathbf{A}(\mathbf{x}) \right)^2 + e V(\mathbf{x}) \right) \hat{U}(\mathbf{x}, \mathbf{x}', t) \Psi(\mathbf{x}', 0) d^3 \mathbf{x}',\end{aligned} \hspace{\stretch{1}}(3.20)

but for the RHS we have

\begin{aligned}&\left( \frac{1}{{2m}} \left\langle{{(\mathbf{p} - \frac{e}{c}\mathbf{A})^2}}\right\rangle + e \phi \right) \Psi=\int d^3 \mathbf{x}'\hat{U}(\mathbf{x}, \mathbf{x}', t) \Psi(\mathbf{x}', 0)  \\ &\qquad\left( \frac{1}{{2m}} \left( \hbar \mathbf{k} - \frac{e}{c} \mathbf{A}(\mathbf{x}) \right)^2 + e V(\mathbf{x}) +\frac{t}{2m i \hbar} \left( - \hbar^2 \boldsymbol{\nabla}^2 + 2 \frac{i e \hbar }{c} \mathbf{A} \cdot \boldsymbol{\nabla} \right)\left( \frac{1}{{2m}} \left( \hbar \mathbf{k} - \frac{e}{c} \mathbf{A}(\mathbf{x}) \right)^2 + e V(\mathbf{x})\right)\right) \end{aligned} \hspace{\stretch{1}}(3.21)

So if it were not for the spatial dependence of \mathbf{A} and \phi, we would have LHS equal to the RHS. It appears that ignoring the odd \mathbf{x} dependence in the \hat{\Psi} differential equation definitely leads to trouble, and only works for constant potential distributions, a rather boring special case.

References

[1] R.P. Feynman. Quantum Electrodynamics. Addison-Wesley Publishing Company. Reading, Massachusetts, 1961.

[2] W. Pauli. Wave Mechanics. Courier Dover Publications, 2000.


errata for Feynman’s Quantum Electrodynamics (Addison-Wesley) ?

Posted by peeterjoot on May 28, 2010

[Click here for a PDF of this post with nicer formatting]

Motivation.

I got a nice present today which included one of Feynman’s QED books. I noticed some early mistakes, and since I can’t find an errata page anywhere, I’ll collect them here.

Third Lecture

Page 6 typos.

The electric field is given in terms of only the scalar potential

\begin{aligned}\mathbf{E} = -\boldsymbol{\nabla} \phi + \partial \phi/ \partial t,\end{aligned}

and should be

\begin{aligned}\mathbf{E} = -\boldsymbol{\nabla} \phi - \frac{1}{{c}} \partial \mathbf{A}/ \partial t.\end{aligned}

The invariant gauge transformation for the vector and scalar potentials are given as

\begin{aligned}\mathbf{A}' &= \mathbf{A} = \boldsymbol{\nabla} \chi \\ \phi' &= \phi + \partial \chi / \partial t\end{aligned}

But these should be

\begin{aligned}\mathbf{A}' &= \mathbf{A} + \boldsymbol{\nabla} \chi \\ \phi' &= \phi - \frac{1}{{c}} \partial \chi / \partial t\end{aligned}

The sign was flipped on the scalar potential transformation. Feynman is also probably used to c = 1 units, but he doesn’t use them explicitly elsewhere on the page, so including the 1/c factors here is proper.

Page 7 notes.

The units in the transformation for the wave function don’t look right. We want to transform the Pauli equation

\begin{aligned}i \hbar \frac{\partial {\Psi}}{\partial {t}} = \frac{1}{{2 m}} \left( \mathbf{p} - \frac{e}{c} \mathbf{A} \right)^2 \Psi + e \phi \Psi,\end{aligned}

with a transformation of the form

\begin{aligned}\mathbf{A}' &= \mathbf{A} + \boldsymbol{\nabla} \chi \\ \phi' &= \phi - \frac{1}{{c}} \frac{\partial {\chi}}{\partial {t}} \\ \Psi' &= e^{-i \mu} \Psi,\end{aligned}

where \mu \propto \chi is presumed, and we want to find the proportionality constant required for invariance. With \mathbf{p} = - i \hbar \boldsymbol{\nabla} we have

\begin{aligned}\mathbf{p} \Psi' &=-i \hbar \boldsymbol{\nabla} e^{-i \mu} \Psi \\ &=-i \hbar \left( -i (\boldsymbol{\nabla} \mu) e^{-i \mu} \Psi + e^{-i \mu} \boldsymbol{\nabla} \Psi  \right) \\ &=e^{-i \mu} \left( \mathbf{p} - \hbar \boldsymbol{\nabla} \mu \right) \Psi,\end{aligned}

so

\begin{aligned}(\mathbf{p} -\frac{e}{c} \mathbf{A}' )\Psi' &=e^{-i \mu} \left( \mathbf{p} - \frac{e}{c} \mathbf{A} - \boldsymbol{\nabla} (\hbar \mu + \frac{e}{c} \chi) \right) \Psi.\end{aligned}

For the time partial we have

\begin{aligned}\frac{\partial {\Psi'}}{\partial {t}} &= e^{-i \mu} \frac{\partial {\Psi}}{\partial {t}} -i \frac{\partial {\mu}}{\partial {t}} e^{-i \mu} \Psi,\end{aligned}

and the scalar potential term transforms as

\begin{aligned}e \phi' \Psi'&=e \left( \phi - \frac{1}{{c}} \frac{\partial {\chi}}{\partial {t}} \right) e^{-i \mu } \Psi\end{aligned}

Putting the pieces together we have

\begin{aligned}i \hbar e^{-i \mu}\left( \frac{\partial {}}{\partial {t}} -i \frac{\partial {\mu}}{\partial {t}} \right) \Psi &=\frac{1}{{2m}}\left(\mathbf{p} -\frac{e}{c} \mathbf{A} -\frac{e}{c} \boldsymbol{\nabla} \chi \right)e^{-i \mu} \left( \mathbf{p} - \frac{e}{c} \mathbf{A} - \boldsymbol{\nabla} (\hbar \mu + \frac{e}{c} \chi) \right) \Psi + e \left( \phi - \frac{1}{{c}} \frac{\partial {\chi}}{\partial {t}} \right) e^{-i \mu } \Psi\end{aligned}

We need one more intermediate result, that of

\begin{aligned}\mathbf{p} e^{-i \mu } \mathbf{D}&= - i \hbar e^{-i \mu} \left( -i (\boldsymbol{\nabla} \mu) + \boldsymbol{\nabla} \right) \mathbf{D} \\ &= e^{-i\mu} (\mathbf{p} - \hbar \boldsymbol{\nabla} \mu) \mathbf{D}.\end{aligned}

So we have

\begin{aligned}i \hbar \frac{\partial {\Psi}}{\partial {t}}+\hbar \frac{\partial {\mu}}{\partial {t}} \Psi &=\frac{1}{{2m}}\left(\mathbf{p} - \hbar \boldsymbol{\nabla} \mu -\frac{e}{c} \mathbf{A} -\frac{e}{c} \boldsymbol{\nabla} \chi \right)\left( \mathbf{p} - \frac{e}{c} \mathbf{A} - \boldsymbol{\nabla} (\hbar \mu + \frac{e}{c} \chi) \right) \Psi + e \left( \phi - \frac{1}{{c}} \frac{\partial {\chi}}{\partial {t}} \right) \Psi.\end{aligned}

To get rid of the \mu, and \chi time partials we need

\begin{aligned}\hbar \frac{\partial {\mu}}{\partial {t}} = - \frac{e}{c} \frac{\partial {\chi}}{\partial {t}}\end{aligned}

Or

\begin{aligned}\mu = - \frac{e}{c\hbar} \chi\end{aligned}

This also kills off all the additional undesirable terms in the transformed \mathbf{P}^2 operator (with \mathbf{P} = \mathbf{p} - e \mathbf{A}/c), leaving the invariant transformation completely specified

\begin{aligned}\mathbf{A}' &= \mathbf{A} + \boldsymbol{\nabla} \chi \\ \phi' &= \phi - \frac{1}{{c}} \frac{\partial { \chi }}{\partial {t}} \\ \Psi' &= \exp\left( i \frac{e}{ \hbar c } \chi \right) \Psi,\end{aligned}

This is a fair bit different from Feynman’s result, but since he starts with the wrong electrodynamic gauge transformation, that’s not too unexpected.
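
The invariance worked out above is also easy to verify mechanically. Here is a one dimensional symbolic check (a sketch, assuming sympy is available); the residual function below is just the difference of the two sides of the Pauli equation, so it vanishes exactly on solutions:

# Sketch: gauge invariance of the 1D Pauli equation under the transformation above.
import sympy as sp

x, t = sp.symbols('x t', real=True)
hbar, m, e, c = sp.symbols('hbar m e c', positive=True)
psi = sp.Function('psi')(x, t)
chi = sp.Function('chi')(x, t)
A = sp.Function('A')(x, t)
phi = sp.Function('phi')(x, t)

def P(f, a):
    # (p - (e/c) A) f in one dimension, with p = -i hbar d/dx.
    return -sp.I * hbar * sp.diff(f, x) - (e / c) * a * f

def residual(f, a, v):
    # i hbar f_t - (1/(2m)) (p - (e/c) a)^2 f - e v f.
    return sp.I * hbar * sp.diff(f, t) - P(P(f, a), a) / (2 * m) - e * v * f

gauge = sp.exp(sp.I * e * chi / (hbar * c))
A2 = A + sp.diff(chi, x)
phi2 = phi - sp.diff(chi, t) / c
psi2 = gauge * psi

# Zero: psi' solves the primed equation exactly when psi solves the original.
print(sp.simplify(sp.expand(residual(psi2, A2, phi2) - gauge * residual(psi, A, phi))))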

Second Lecture

This isn’t an erratum, but I found the following required some exploration. He gives (implicitly)

\begin{aligned}\overline{\sin^2(\omega t - \mathbf{K} \cdot \mathbf{x})} = \frac{1}{{2}}\end{aligned}

Is this an average over space and time? How would one do that? What do we get just integrating this over the volume? That dot product is \mathbf{K} \cdot \mathbf{x} = 2 \pi \left(\frac{m}{\lambda_1} x + \frac{n}{\lambda_2} y + \frac{o}{\lambda_3} z \right). Our average over the volume, for m \ne 0, using Wolfram Alpha to do the dirty work, is

\begin{aligned}&\frac{1}{{\lambda_1 \lambda_2 \lambda_3}} \int_{z=0}^{\lambda_3} dz\int_{y=0}^{\lambda_2} dy\int_{x=0}^{\lambda_1}dx \sin^2 \left( -\frac{2 \pi m x}{\lambda_1} -\frac{2 \pi n y}{\lambda_2} -\frac{2 \pi o z}{\lambda_3} + \omega t \right) \\ &=\frac{1}{{\lambda_1 \lambda_2 \lambda_3}} \int_{z=0}^{\lambda_3} dz\int_{y=0}^{\lambda_2} dy{\left.\frac{-\lambda_1}{4 \pi m} \left( -\frac{2 \pi m }{\lambda_1} x -\frac{2 \pi n y}{\lambda_2} -\frac{2 \pi o z}{\lambda_3} + \omega t \right)\right\vert}_{x=0}^{\lambda_1} \\ &-\frac{1}{{\lambda_1 \lambda_2 \lambda_3}} \int_{z=0}^{\lambda_3} dz\int_{y=0}^{\lambda_2} dy{\left.\frac{-\lambda_1}{8 \pi m} \sin \left( 2 \left(-\frac{2 \pi m }{\lambda_1} x -\frac{2 \pi n y}{\lambda_2} -\frac{2 \pi o z}{\lambda_3} + \omega t \right) \right)\right\vert}_{x=0}^{\lambda_1}\end{aligned}

Since the sine integral vanishes, we have just 1/2 as expected, regardless of the angular frequency \omega. Okay, that makes sense now. Looks like \omega is only relevant for the single \mathbf{K} = 0 Fourier component, but that likely doesn’t matter since I seem to recall that the \mathbf{K} = 0 Fourier component of this oscillators-in-a-box problem was entirely constant (and perhaps zero?).
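
A quick numeric double-check of that spatial average (a sketch, assuming scipy is available; the wavelengths, the integer mode numbers, and the instant \omega t below are arbitrary choices):

# Sketch: the volume average of sin^2(omega t - K.x) is 1/2 for m != 0.
import numpy as np
from scipy import integrate

l1, l2, l3 = 1.0, 2.0, 3.0    # lambda_1, lambda_2, lambda_3
m, n, o = 2, 1, 3             # integer mode numbers (m != 0)
wt = 0.7                      # omega t, an arbitrary instant

f = lambda z, y, x: np.sin(wt - 2*np.pi*(m*x/l1 + n*y/l2 + o*z/l3))**2
avg, _ = integrate.tplquad(f, 0, l1, 0, l2, 0, l3)
print(avg / (l1 * l2 * l3))   # ~0.5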


An in-place c++filt ?

Posted by peeterjoot on May 26, 2010

A filter script like c++filt can be a bit irritating sometimes. Imagine that you want to run something like the following

$ c++filt < v > v

The effect of this is to completely clobber the input file, and not alter it in place. You may think that something like the following may work, so that the read is done first by the cat program:

$ cat v | c++filt > v

but this also doesn’t work, and one is also left with a zero sized output file, and not the filtered output. I’ve run stuff like the following a number of times:

$ for i in *some list of files* ; do c++filt < $i > $i.tmp$$ ; mv $i.tmp$$ $i ; done

and have often wondered if there’s an easier way. One way would be to put something like this in a script and avoid re-creating a command line like this every time. I tried this in perl, making a stdin/stdout filter by default, and a file modifying helper when files are listed specifically (not really a filter anymore, but often how I’d like to be able to invoke c++filt). Here’s that beastie:

#!/usr/bin/perl

use warnings ;
use strict ;

# slurp whole file into a single variable
undef( $/ ) ; #slurp mode

if ( scalar(@ARGV) )
{
   foreach (@ARGV)
   {
      my $cmd = "cat $_ | c++filt |" ;

      open( my $fhIn, $cmd ) or die "pipe open '$cmd' failed\n" ;

      my $file_contents = ( <$fhIn> ) ;

      close $fhIn or die "read or pipe close of '$cmd' failed\n" ;


      open( my $fhOut, ">$_") or die "open of '$_' for write failed\n" ;

      print $fhOut $file_contents ;

      close $fhOut or die "close or write to '$_' failed\n" ;
   }
}
else
{
   my $file_contents = ( <> ) ;

   print $file_contents ;
}

This also works, but is clunkier than I expected. If anybody knows of some way to use or abuse the in-place filtering capability of perl (ie: perl -p -i) to do something like this, or some other clever way to do this, I’d be curious what it is.
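
One possibility that avoids the temporary file dance entirely, assuming the sponge utility from the moreutils package is available: sponge soaks up all of its standard input before opening the output file, so

$ c++filt < v | sponge v

does this kind of in-place filtering directly.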


A fun and curious dig. GCC generation of a ud2a instruction (SIGILL)

Posted by peeterjoot on May 26, 2010

Recently some of our code started misbehaving only when compiled with the GCC compiler. Our post mortem stacktrace and data collection tools didn’t deal with this trap very gracefully, and dealing with that (or even understanding it) is a different story.

What I see in the debugger once I find the guilty thread is:

(gdb) thread 12
[Switching to thread 12 (Thread 46970517317952 (LWP 30316))]#0  0x00002ab824438ec1 in __gxx_personality_v0 ()
    at ../../../../gcc-4.2.2/libstdc++-v3/libsupc++/eh_personality.cc:351
351     ../../../../gcc-4.2.2/libstdc++-v3/libsupc++/eh_personality.cc: No such file or directory.
        in ../../../../gcc-4.2.2/libstdc++-v3/libsupc++/eh_personality.cc
(gdb) where
#0  0x00002ab824438ec1 in __gxx_personality_v0 ()
    at ../../../../gcc-4.2.2/libstdc++-v3/libsupc++/eh_personality.cc:351
#1  0x00002ab824438cc9 in sleep () from /lib64/libc.so.6
#2  0x00002ab8203090ee in sqloEDUSleepHandler (signum=20, sigcode=0x2ab82cffa0c0, scp=0x2ab82cff9f90)
    at sqloinst.C:283
#3  <signal handler called>
#4  0x00002ab81cf03231 in __gxx_personality_v0 ()
    at ../../../../gcc-4.2.2/libstdc++-v3/libsupc++/eh_personality.cc:351
#5  0x00002ab823b9b745 in ossSleep () from /home/hotel74/peeterj/sqllib/lib64/libdb2osse.so.1
#6  0x00002ab821206992 in pdInvokeCalloutScript () at /view/peeterj_m19/vbs/engn/include/sqluDMSort_inlines.h:158
#7  0x00002ab82030fe99 in sqloEDUCodeTrapHandler (signum=4, sigcode=0x2ab82cffcc60, scp=0x2ab82cffcb30)
    at sqloedu.C:4476
#8  <signal handler called>
#9  0x00002ab821393257 in sqluInitLoadEDU (pPrivateACBIn=0x2059e0080, ppPrivateACBOut=0x2ab82cffd320,
    puchAuthID=0x2ab8fcef19b8 "PEETERJ ", pNLSACB=0x2ab8fceea168, pComCB=0x2ab8fceea080, pMemPool=0x2ab8fccca2d0)
    at sqluedus.C:1696
#10 0x00002ab8212d34c2 in sqluldat (pArgs=0x2ab82cffdef0 "", argsSize=96) at sqluldat.C:737
#11 0x00002ab820310ced in sqloEDUEntry (parms=0x2ab82f3e9680) at sqloedu.C:3438
#12 0x00002ab81cefc143 in start_thread () from /lib64/libpthread.so.0
#13 0x00002ab82446674d in clone () from /lib64/libc.so.6
#14 0x0000000000000000 in ?? ()

Observe that there are two sets of <signal handler called> frames. One is from the original SIGILL, and the other from a signal that our “main” thread ends up sending to all the rest of the threads as part of our process for freezing things to be able to take a peek and see what’s up.

Looking at the siginfo_t for the SIGILL handler we have:

(gdb) frame 7
#7  0x00002ab82030fe99 in sqloEDUCodeTrapHandler (signum=4, sigcode=0x2ab82cffcc60, scp=0x2ab82cffcb30)
    at sqloedu.C:4476
4476    sqloedu.C: No such file or directory.
        in sqloedu.C
(gdb) p *sigcode
$4 = {si_signo = 4, si_errno = 0, si_code = 2, _sifields = {_pad = {557396567, 10936, 0, 0, 1, 16777216,
      -1170923664, 10936, 754961616, 10936, 599153081, 10936, 0, 0, 15711488, 10752, 4, 0, -1170923664, 10936, 1, 0,
      0, 0, 754961680, 10936, 4292335, 0}, _kill = {si_pid = 557396567, si_uid = 10936}, _timer = {
      si_tid = 557396567, si_overrun = 10936, si_sigval = {sival_int = 0, sival_ptr = 0x0}}, _rt = {
      si_pid = 557396567, si_uid = 10936, si_sigval = {sival_int = 0, sival_ptr = 0x0}}, _sigchld = {
      si_pid = 557396567, si_uid = 10936, si_status = 0, si_utime = 72057594037927937, si_stime = 46972886392688},
    _sigfault = {si_addr = 0x2ab821393257}, _sigpoll = {si_band = 46970319745623, si_fd = 0}}}
(gdb) p /x *sigcode
$5 = {si_signo = 0x4, si_errno = 0x0, si_code = 0x2, _sifields = {_pad = {0x21393257, 0x2ab8, 0x0, 0x0, 0x1,
      0x1000000, 0xba351f70, 0x2ab8, 0x2cffccd0, 0x2ab8, 0x23b659b9, 0x2ab8, 0x0, 0x0, 0xefbd00, 0x2a00, 0x4, 0x0,
      0xba351f70, 0x2ab8, 0x1, 0x0, 0x0, 0x0, 0x2cffcd10, 0x2ab8, 0x417eef, 0x0}, _kill = {si_pid = 0x21393257,
      si_uid = 0x2ab8}, _timer = {si_tid = 0x21393257, si_overrun = 0x2ab8, si_sigval = {sival_int = 0x0,
        sival_ptr = 0x0}}, _rt = {si_pid = 0x21393257, si_uid = 0x2ab8, si_sigval = {sival_int = 0x0,
        sival_ptr = 0x0}}, _sigchld = {si_pid = 0x21393257, si_uid = 0x2ab8, si_status = 0x0,
      si_utime = 0x100000000000001, si_stime = 0x2ab8ba351f70}, _sigfault = {si_addr = 0x2ab821393257}, _sigpoll = {
      si_band = 0x2ab821393257, si_fd = 0x0}}}

This has the si_addr value 0x00002AB821393257, which also matches frame 9 in the stack, in sqluInitLoadEDU. What was at that line of code doesn’t appear to be something that ought to generate a SIGILL:

   1693    // Set current activity in private agent CB to
   1694    // point to the activity that the EDU is working
   1695    // on behalf of.
   1696    pPrivateACB->agtRqstCB.pActivityCB = pComCB->my_curr_activity_entry;
   1697 #ifdef DB2_DEBUG
   1698    { //!!  This debug code is only useful in conjunction with a trap described by W749645
   1699       char mesg[500];
   1700       sprintf(mesg,"W749645:uILE pPr->agtR=%p ->pAct=%p",pPrivateACB->agtRqstCB,pPrivateACB->agtRqstCB.pActivityCB);
   1701       sqlt_logerr_str(SQLT_SQLU, SQLT_sqluInitLoadEDU, __LINE__, mesg, NULL, 0, SQLT_FFSL_INF);
   1702    } //!!
   1703 #endif

So what is going on? Let’s look at the assembly for the trapping instruction address. Using ‘(gdb) set logging on’, and ‘(gdb) disassemble’ we find:

0x00002ab82139323c 
0x00002ab82139323e : mov    0xfffffffffffffd68(%rbp),%rax
0x00002ab821393245 : mov    0x6498(%rax),%rdx
0x00002ab82139324c : mov    0xffffffffffffffb0(%rbp),%rax
0x00002ab821393250 : mov    %rdx,0x5bd0(%rax)
0x00002ab821393257 : ud2a
^^^^^^^^^^^^^^^^^^
0x00002ab821393259 : cmpl   $0x0,0xffffffffffffffac(%rbp)
0x00002ab82139325d 
0x00002ab82139325f : mov    0xfffffffffffffd80(%rbp),%rdi
0x00002ab821393266 : callq  0x2ab81dcd4218 
0x00002ab82139326b : mov    0xffffffffffffffd8(%rbp),%rax
0x00002ab82139326f : and    $0x82,%eax
0x00002ab821393274 : test   %rax,%rax

Hmm. What is a ud2a instruction? Google is our friend, and we find that the Linux kernel uses this as a “guaranteed invalid instruction”. It is used to fault the processor and halt the kernel in case you did something really really bad.

Other similar references can be found, also explaining the use in the Linux kernel. So what is this doing in userspace code? It seems like something too specific to get there by accident, and since the instruction stream itself contains this, stack corruption or any other sneaky nasty mechanism doesn’t seem likely. The instruction doesn’t immediately follow a callq, so a runtime loader malfunction or something else equally odd doesn’t seem likely either.

Perhaps the compiler put this instruction into the code for some reason. A compiler bug perhaps? A new Google search for the GCC ud2a instruction finds me

   ...generates this warning (using gcc 4.4.1 but I think it applies to most
   gcc versions):

   main.cpp:12: warning: cannot pass objects of non-POD type 'class A'
   through '...'; call will abort at runtime

   1. Why is this a "warning" rather than an "error"? When I run the program
   it hits a "ud2a" instruction emitted by gcc and promptly hits SIGILL.

Oh my! It sounds like GCC has cowardly refused to generate an error, but also bravely refused to generate bad code for whatever this code sequence is. Do I have such an error in my build log? In fact, I have three, all of which look like:

sqluedus.C:1464: warning: deprecated conversion from string constant to 'char*'
sqluedus.C:1700: warning: cannot pass objects of non-POD type 'struct sqlrw_request_cb' through '...'; call will abort at runtime

At 1700 of that file we have:

sprintf(mesg,"W749645:uILE pPr->agtR=%p ->pAct=%p",pPrivateACB->agtRqstCB,pPrivateACB->agtRqstCB.pActivityCB);

It turns out that agtRqstCB is a rather large structure, and passing it by value certainly doesn’t match the %p that the developer used in this debug build special code (presumably &pPrivateACB->agtRqstCB was intended). The debug code actually makes things worse, and certainly won’t help on any platform. It probably won’t crash on any platform either (except when using the GCC compiler), since there are no subsequent %s format parameters that would get messed up by placing gob-loads of structure data in the varargs data area inappropriately.

This should resolve the issue and allow me to go back to avoiding the (much slower!) Intel compiler that is used by our nightly build process.


Effect of sinusoid operators

Posted by peeterjoot on May 23, 2010

[Click here for a PDF of this post with nicer formatting]

Problem 3.19.

[1] problem 3.19 is

What is the effect of operating on an arbitrary function f(x) with the following two operators

\begin{aligned}\hat{O}_1 &\equiv \partial^2/\partial x^2 - 1 + \sin^2 (\partial^3/\partial x^3)+ \cos^2 (\partial^3/\partial x^3) \\ \hat{O}_2 &\equiv \cos (2 \partial/\partial x) + 2 \sin^2 (\partial/\partial x)+ \int_a^b dx\end{aligned} \hspace{\stretch{1}}(1.1)

On the surface with \sin^2 y + \cos^2 y = 1 and \cos 2y + 2 \sin^2 y = 1 it appears that we have just

\begin{aligned}\hat{O}_1 &\equiv \partial^2/\partial x^2  \\ \hat{O}_2 &\equiv 1 + \int_a^b dx\end{aligned} \hspace{\stretch{1}}(1.3)

but is this justified when the sinusoids are functions of operators? Let’s look at the first case. For some operator \hat{f} we have

\begin{aligned}\sin^2 \hat{f} + \cos^2 \hat{f}&=-\frac{1}{{4}} \left( e^{i\hat{f}} -e^{-i\hat{f}}\right)\left( e^{i\hat{f}} -e^{-i\hat{f}}\right)+\frac{1}{{4}} \left( e^{i\hat{f}} +e^{-i\hat{f}}\right)\left( e^{i\hat{f}} +e^{-i\hat{f}}\right) \\ &=\frac{1}{{2}} \left(e^{i\hat{f}} e^{-i\hat{f}} +e^{-i\hat{f}} e^{i\hat{f}}\right)\end{aligned}

Can we assume that these products reduce to unity for general operators? How about for our specific differential operator \hat{f} = \partial^3/\partial x^3? For that one we have

\begin{aligned}e^{i \partial^3/\partial x^3} e^{-i \partial^3/\partial x^3} g(x)&=\sum_{k=0}^\infty \frac{1}{{k!}} \left(\frac{\partial^3}{\partial x^3}\right)^k\sum_{m=0}^\infty \frac{1}{{m!}} \left(\frac{\partial^3}{\partial x^3}\right)^m g(x)\end{aligned}

Since the differentials commute, so do the exponentials and we can write the slightly simpler

\begin{aligned}\sin^2 \hat{f} + \cos^2 \hat{f} = e^{i\hat{f}} e^{-i\hat{f}} \end{aligned}

I’m pretty sure the commutative property of this differential operator would also allow us to say (in this case at least)

\begin{aligned}\sin^2 \hat{f} + \cos^2 \hat{f} = 1\end{aligned}

I will have to look up the combinatoric argument that allows one to write, for numbers,

\begin{aligned}e^x e^y = \sum_{k=0}^\infty \frac{1}{{k!}} x^k \sum_{m=0}^\infty \frac{1}{{m!}} y^m =\sum_{j=0}^\infty \frac{1}{{j!}} (x+y)^j = e^{x+y}\end{aligned}

If this only assumes that x and y commute, and no other numeric properties, then we have the supposed result 1.3. We also know of algebraic objects where this does not hold. One example is exponentials of non-commuting square matrices, and another is non-commuting bivector exponentials.
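
The square matrix case is easy to poke at numerically (a sketch, assuming scipy is available, which supplies matrix sin and cos as sinm and cosm; the random matrix M below is an arbitrary choice). Since any matrix commutes with itself, the identity should survive:

# Sketch: sin^2(M) + cos^2(M) = 1 holds for a matrix argument.
import numpy as np
from scipy.linalg import sinm, cosm

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))    # need not be symmetric or normal

S, C = sinm(M), cosm(M)
print(np.allclose(S @ S + C @ C, np.eye(4)))   # True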

References

[1] R. Liboff. Introductory quantum mechanics. 2003.


Time evolution of some wavefunctions (incomplete)

Posted by peeterjoot on May 23, 2010

[Click here for a PDF of this post with nicer formatting]

Motivation.

Problem 3.14 of [1] asks us to describe the time evolution of the following wavefunctions

\begin{aligned}\psi_1 &= A \sin \omega t \cos k (x + c t) \\ \psi_2 &= A \sin 10^{-5} k x \cos k (x - c t) \\ \psi_3 &= A \cos k ( x - c t ) \sin 10^{-5} k (x - c t)\end{aligned}

This isn’t really a QM problem, but seems worthwhile anyways, because it isn’t obvious just looking at the functions what these waveforms are.

\psi_3

Let’s start in reverse order with \psi_3, but in a slightly more general form that is less error-prone to manipulate. These wavefunctions can be viewed as superpositions, and expanding out as exponentials temporarily gets us to a form that makes this more obvious.

\begin{aligned}\psi &= A \sin k_1 ( x + v_1 t) \cos k_2 ( x + v_2 t) \\ &= \frac{A}{4i} \left( e^{ i k_1 ( x + v_1 t)} - e^{ -i k_1 ( x + v_1 t)} \right) \left( e^{ i k_2 ( x + v_2 t)} + e^{ -i k_2 ( x + v_2 t)} \right) \\ &= \frac{A}{2} \left( \sin ((k_1 + k_2) x + (k_1 v_1 + k_2 v_2 ) t) + \sin ((k_1 - k_2) x + (k_1 v_1 - k_2 v_2 ) t) \right) \\ &= \frac{A}{2} \left( \sin \left( (k_1 + k_2) \left(x + \frac{k_1 v_1 + k_2 v_2 }{k_1 + k_2} t\right) \right)+\sin \left( (k_1 - k_2) \left(x + \frac{k_1 v_1 - k_2 v_2 }{k_1 - k_2} t\right) \right)\right)\end{aligned}

Now the problem is simplified to observing how a wave of the form \phi = \sin \kappa (x + v t) propagates, or really the interaction of two such waves moving together or against each other, depending on the signs of the constants. Let’s now put in the constants for \psi_3 to get a better feel for it

\begin{aligned}\psi_3&= \frac{A}{2} \left( \sin \left( 1.00001 k \left(x - c t\right) \right)-\sin \left( 0.99999 k \left(x - c t\right) \right)\right)\end{aligned}

It wasn’t obvious from the original product-of-sinusoids form given in the question that the resulting waveform stays in phase as it propagates, but we see that to be the case above. This really just leaves some thought about the standing wave itself to understand what is happening. For that, at time 0, we have a destructive interference superposition of two standing waves of almost identical period. That near perfect cancellation will likely leave an envelope, and a Mathematica plot gives a better feel for this waveform. This in turn will propagate at light speed down the x-axis. Because k_1 is so small we have a nearly linear, and nearly flat, envelope for the \cos k x, as can be expected near the origin, since there we have

\begin{aligned}\psi_3(x, 0) \approx A 10^{-5} k x \cos k x\end{aligned}

Comparing to a smaller range plot, one sees that it is necessary to increase the plot range significantly before seeing the oscillatory nature of the envelope.

\psi_2

For \psi_2 we have almost the same wave function, but our sine term has no time variation. What does this do to the waveform? Let’s see if a sum and difference of angles form sheds some light on that. We have

\begin{aligned}\psi &= A \sin k_1 x \cos k_2 ( x + v_2 t) \\ &= \frac{A}{4i} \left( e^{ i k_1 x } - e^{ -i k_1 x } \right) \left( e^{ i k_2 ( x + v_2 t)} + e^{ -i k_2 ( x + v_2 t)} \right) \\ &= \frac{A}{2} \left( \sin ((k_1 + k_2) x + k_2 v_2 t) + \sin ((k_1 - k_2) x - k_2 v_2 t) \right) \\ &= \frac{A}{2} \left( \sin \left( (k_1 + k_2) \left(x + \frac{k_2 v_2 }{k_1 + k_2} t\right) \right)+\sin \left( (k_1 - k_2) \left(x - \frac{k_2 v_2 }{k_1 - k_2} t\right) \right)\right)\end{aligned}

Specifically for k_1 = 10^{-5} k, and k_2 = k, we have

\begin{aligned}\psi_2&= \frac{A}{2} \left( \sin \left( 1.00001 k \left(x - \frac{1}{1.00001} c t\right) \right)-\sin \left( 0.99999 k \left(x - \frac{1}{0.99999 } c t\right) \right)\right)\end{aligned}

Again at t=0 we have a very widely spread envelope with rapid oscillations within it. The two component waveforms move in the same direction, at very slightly different phase speeds, so they slowly drift out of phase. What does that phase change do to the evolution? Looking back to the original product of sinusoids form for \psi_2, I believe this just means we have the phase shifting with time within the (static) envelope.

\psi_1

Finally for the first wave function we have both of the sinusoid factors with time variation. What does that expand out to in terms of superposition of fundamental frequencies?

\begin{aligned}\psi_1&= A \sin \omega t \cos k ( x + c t) \\ &= \frac{A}{4i} \left( e^{ i \omega t} - e^{ -i \omega t } \right) \left( e^{ i k ( x + c t)} + e^{ -i k ( x + c t)} \right) \\ &= \frac{A}{2} \left( \sin ( k (x + c t) + \omega t )-\sin ( k (x + c t) - \omega t )\right),\end{aligned}

or

\begin{aligned}\psi_1&= \frac{A}{2} \left( \sin ( k x + ( k c + \omega ) t )-\sin ( k x + ( k c - \omega) t )\right)\end{aligned}

We have the superposition of two \sin k x waveforms, destructively interfering with each other, one with phase changing at the angular rate k c + \omega, and the other at the rate k c - \omega. What is the overall waveform? It’s still not obvious what this is. I actually have the inclination not to treat these analytically, but to pull out some graphing software. Something like the real Mathematica software would be nice, since it would allow for the use of sliders to vary parameters and then animate the graphs as time varied.
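
Short of slider driven animation, a rough plot of \psi_1 at a few times gives some feel for this (a sketch, assuming numpy and matplotlib are available; the values of A, k, c and \omega below are arbitrary choices):

# Sketch: snapshots of psi_1 = A sin(omega t) cos(k (x + c t)) at a few times.
import numpy as np
import matplotlib.pyplot as plt

A, k, c, omega = 1.0, 2 * np.pi, 1.0, 3.0
x = np.linspace(0, 10, 2000)

for t in (0.0, 0.25, 0.5, 0.75):
    psi1 = A * np.sin(omega * t) * np.cos(k * (x + c * t))
    plt.plot(x, psi1, label=f"t = {t}")

plt.xlabel("x")
plt.ylabel("psi_1")
plt.legend()
plt.show()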

References

[1] R. Liboff. Introductory quantum mechanics. 2003.


Center of mass for a circular wire segment

Posted by peeterjoot on May 16, 2010

[Click here for a PDF of this post with nicer formatting]

As a check for the torus segment center of mass calculation, there should be agreement in the limit where the tube radius of the torus goes to zero (with a non-zero correction otherwise).

Center of mass for a circular wire segment.

As an additional check for the correctness of the result above, we should be able to compare with the center of mass of a circular wire segment, and get the same result in the limit r \rightarrow 0.

For that we have

\begin{aligned}Z (R \Delta \theta) = \int_{\theta=-\Delta \theta/2}^{\Delta \theta/2} R i e^{-i\theta} R d\theta\end{aligned} \hspace{\stretch{1}}(4.21)

So we have

\begin{aligned}Z &= \frac{1}{{\Delta \theta}} R i \frac{1}{{-i}} (e^{-i\Delta \theta/2} - e^{i\Delta\theta/2}).\end{aligned} \hspace{\stretch{1}}(4.22)

Observe that this is

\begin{aligned}Z &= R i \frac{\sin(\Delta\theta/2)}{\Delta\theta/2},\end{aligned} \hspace{\stretch{1}}(4.23)

which is consistent with the previous calculation for the solid torus when we let that solid diameter shrink to zero.

In particular, for 3/4 of the torus, we have \Delta \theta = 2 \pi (3/4) = 3 \pi/2, and

\begin{aligned}Z = R i \frac{4 \sin(3\pi/4)}{3 \pi} = R i \frac{2 \sqrt{2}}{3 \pi} \approx 0.3 R i.\end{aligned} \hspace{\stretch{1}}(4.24)

We are a little bit up the imaginary axis as expected.
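
A numeric double-check of 4.24 (a sketch, assuming numpy is available; R = 1 below is an arbitrary choice):

# Sketch: center of mass of 3/4 of a circular wire, compared against 4.24.
import numpy as np

R = 1.0
dtheta = 3 * np.pi / 2                   # 3/4 of the circle
theta = np.linspace(-dtheta/2, dtheta/2, 20001)

z = R * 1j * np.exp(-1j * theta)         # position R i e^{-i theta}
Z = np.trapz(z, theta) / dtheta

print(Z)                                 # ~0.3001 i R
print(R * 2 * np.sqrt(2) / (3 * np.pi))  # 4.24: 0.3001...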

I’d initially somehow thought I’d been off by a factor of two compared to the result by The Virtuosi, without seeing a mistake in either. But that now appears not to be the case, and I just screwed up plugging in the numbers. Once again, I should go to my eight year old son when I have arithmetic problems, and restrict myself to just the calculus and algebra bits.


Center of mass for a toroidal segment

Posted by peeterjoot on May 15, 2010

Motivation.

[Click here for a PDF of this post with nicer formatting.] Note that this PDF file is formatted in a wide-for-screen layout that is probably not good for printing.

After seeing Iron Man II with Lance earlier, a movie with bountiful toroids, and now that the kids are tucked in, finishing up the toroidal center of mass calculation started earlier seems like it is in order. This is a problem I’d been meaning to try since reading The Virtuosi’s blog post on the center of mass of a toroidal wire segment.

Center of mass.

With the prep done, we are ready to move on to the original problem. Given a toroidal segment over angle \theta \in [-\Delta \theta/2, \Delta \theta/2], the volume of that segment is

\begin{aligned}\Delta V = r^2 R \pi \Delta \theta.\end{aligned} \hspace{\stretch{1}}(3.13)

Our center of mass position vector is then located at

\begin{aligned}\mathbf{R} \Delta V &= \int_{\rho=0}^r \int_{\theta=-\Delta \theta/2}^{\Delta \theta/2} \int_{\phi=0}^{2\pi} e^{-j\theta/2} \left( \rho \mathbf{e}_1 e^{ i \phi } + R \mathbf{e}_3 \right) e^{j \theta/2} \rho \left( R + \rho \sin\phi \right) d\rho d\theta d\phi.\end{aligned} \hspace{\stretch{1}}(3.14)

Evaluating the \phi integrals, the \int_0^{2\pi} e^{i\phi} d\phi and \int_0^{2\pi} \sin\phi d\phi terms vanish, and we are left with \int_0^{2\pi} e^{i\phi} \sin\phi d\phi = i \pi and \int_0^{2\pi} d\phi = 2 \pi. This leaves us with

\begin{aligned}\mathbf{R} \Delta V &= \int_{\rho=0}^r \int_{\theta=-\Delta \theta/2}^{\Delta \theta/2} \left( e^{-j\theta/2} \rho^3 \mathbf{e}_3 \pi e^{j \theta/2} + 2 \pi \rho R^2 \mathbf{e}_3 e^{j \theta}  \right) d\rho d\theta \\ &= \int_{\theta=-\Delta \theta/2}^{\Delta \theta/2} \left( e^{-j\theta/2} r^4 \mathbf{e}_3 \frac{\pi}{4} e^{j \theta/2} + 2\pi \frac{1}{{2}} r^2 R^2 \mathbf{e}_3 e^{j \theta}  \right) d\theta \\ &= \int_{\theta=-\Delta \theta/2}^{\Delta \theta/2} \left( e^{-j\theta/2} r^4 \mathbf{e}_3 \frac{\pi}{4} e^{j \theta/2} + \pi r^2 R^2 \mathbf{e}_3 e^{j \theta}  \right) d\theta.\end{aligned} \hspace{\stretch{1}}(3.15)

Since \mathbf{e}_3 j = -j \mathbf{e}_3, we have a conjugate commutation with the e^{-j \theta/2} for just

\begin{aligned}\mathbf{R} \Delta V &= \pi r^2 \left( \frac{r^2}{4} + R^2 \right) \mathbf{e}_3 \int_{\theta=-\Delta \theta/2}^{\Delta \theta/2} e^{j \theta} d\theta \\ &= \pi r^2 \left( \frac{r^2}{4} + R^2 \right) \mathbf{e}_3 2 \sin(\Delta \theta/2).\end{aligned} \hspace{\stretch{1}}(3.18)

A final reassembly, provides the desired final result for the center of mass vector

\begin{aligned}\mathbf{R} &= \mathbf{e}_3 \frac{1}{{R}} \left( \frac{r^2}{4} + R^2 \right) \frac{\sin(\Delta \theta/2)}{ \Delta \theta/2 }.\end{aligned} \hspace{\stretch{1}}(3.20)

Presuming no algebraic errors have been made, how about a couple of sanity checks to see if this result seems plausible.

We are pointing in the z-axis direction as expected by symmetry. Good. For \Delta \theta = 2 \pi, our center of mass vector is at the origin. Good, that’s also what we expected. If we let r \rightarrow 0, and \Delta \theta \rightarrow 0, we have \mathbf{R} = R \mathbf{e}_3 as also expected for a tiny segment of “wire” at that position. Also good.
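
Since it would be easy for a factor to go missing in the \phi integrals above, here is a numeric double-check of 3.20 against a brute force triple integral in ordinary coordinates (a sketch, assuming scipy is available; the values of r, R and the segment angle below are arbitrary choices):

# Sketch: center of mass of a torus segment, compared against 3.20.
import numpy as np
from scipy import integrate

r, R, dtheta = 0.5, 1.0, 3 * np.pi / 2

# Volume element rho (R + rho sin(phi)) drho dphi dtheta, and the component
# of position along the bisector of the segment, (R + rho sin(phi)) cos(theta).
w = lambda theta, phi, rho: rho * (R + rho * np.sin(phi))
fx = lambda theta, phi, rho: np.cos(theta) * (R + rho * np.sin(phi)) * w(theta, phi, rho)

num, _ = integrate.tplquad(fx, 0, r, 0, 2*np.pi, -dtheta/2, dtheta/2)
vol, _ = integrate.tplquad(w, 0, r, 0, 2*np.pi, -dtheta/2, dtheta/2)

print(num / vol)                                            # numeric center of mass
print((r**2/4 + R**2) / R * np.sin(dtheta/2) / (dtheta/2))  # 3.20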
