
# Posts Tagged ‘wave packet’

## PHY456H1F: Quantum Mechanics II. Lecture 8 (Taught by Prof J.E. Sipe). Time dependent perturbation (cont.)

Posted by peeterjoot on October 8, 2011

[Click here for a PDF of this post with nicer formatting (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors).]

# Disclaimer.

Peeter’s lecture notes from class. May not be entirely coherent.

# Time dependent perturbation.

We’d gotten as far as calculating

\begin{aligned}c_m^{(1)}(\infty) = \frac{1}{{i \hbar}} \boldsymbol{\mu}_{ms} \cdot \mathbf{E}(\omega_{ms})\end{aligned} \hspace{\stretch{1}}(2.1)

where

\begin{aligned}\mathbf{E}(t) = \int \frac{d\omega}{2 \pi} \mathbf{E}(\omega) e^{-i \omega t},\end{aligned} \hspace{\stretch{1}}(2.2)

and

\begin{aligned}\omega_{ms} = \frac{E_m - E_s}{\hbar}.\end{aligned} \hspace{\stretch{1}}(2.3)

Graphically, these frequencies are illustrated in figure (\ref{fig:qmTwoL8fig0FrequenciesAbsorbtionAndEmission})

\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.2\textheight]{qmTwoL8fig0FrequenciesAbsorbtionAndEmission}
\caption{Positive and negative frequencies.}
\end{figure}

The probability for a transition from $s$ to $m$ is therefore

\begin{aligned}\rho_{s \rightarrow m} = {\left\lvert{ c_m^{(1)}(\infty) }\right\rvert}^2= \frac{1}{\hbar^2} {\left\lvert{\boldsymbol{\mu}_{ms} \cdot \mathbf{E}(\omega_{ms})}\right\rvert}^2\end{aligned} \hspace{\stretch{1}}(2.4)

Recall that because the electric field is real we had

\begin{aligned}{\left\lvert{\mathbf{E}(\omega)}\right\rvert}^2 = {\left\lvert{\mathbf{E}(-\omega)}\right\rvert}^2.\end{aligned} \hspace{\stretch{1}}(2.5)
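This conjugate symmetry is easy to check numerically for any real signal. A small numpy sketch (the random pulse and sample count here are arbitrary choices of mine, not anything from the lecture):

```python
import numpy as np

# Any real-valued pulse will do; a random sequence is the harshest test.
rng = np.random.default_rng(0)
E_t = rng.standard_normal(128)      # real field samples E(t)

E_w = np.fft.fft(E_t)               # discrete spectrum E(omega)

# Reality of E(t) forces E(-omega) = E(omega)^*, so the magnitudes at
# +/- omega agree.  In FFT index terms, frequency -k lives at index N - k.
N = len(E_t)
assert np.allclose(np.abs(E_w[1:]), np.abs(E_w[:0:-1]))
```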

Suppose that we have a wave pulse, where our field magnitude is perhaps of the form

\begin{aligned}E(t) = e^{-t^2/T^2} \cos(\omega_0 t),\end{aligned} \hspace{\stretch{1}}(2.6)

as illustrated with $\omega_0 = 10$, $T = 1$ in figure (\ref{fig:gaussianWavePacket}).

\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.2\textheight]{gaussianWavePacket}
\caption{Gaussian wave packet}
\end{figure}

We expect this to have a two lobe Fourier spectrum, with the lobes centered at $\omega = \pm 10$, and width proportional to $1/T$.

For reference, as calculated using Mathematica (in its default symmetric transform convention) this Fourier transform is

\begin{aligned}E(\omega) = \frac{T}{2 \sqrt{2}} \left(e^{-\frac{1}{4} T^2 (\omega +\omega_0 )^2}+e^{-\frac{1}{4} T^2 (\omega - \omega_0 )^2}\right)\end{aligned} \hspace{\stretch{1}}(2.7)

This is illustrated, again for $\omega_0 = 10$ and $T=1$, in figure (\ref{fig:FTgaussianWavePacket})

\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.2\textheight]{FTgaussianWavePacket}
\caption{Fourier transform of the Gaussian wave packet.}
\end{figure}

where we see the expected Gaussian result, since the Fourier transform of a Gaussian is a Gaussian.
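The two lobe structure is easy to verify numerically. This sketch samples the pulse of 2.6 and locates the spectral peak with an FFT; the window and sample count are arbitrary choices, and the spectrum here uses the plain $\int E(t) e^{-i\omega t} dt$ convention, so magnitudes differ from the symmetric-convention result by a $\sqrt{2\pi}$ factor:

```python
import numpy as np

# Sample the pulse E(t) = exp(-t^2/T^2) cos(w0 t) of 2.6
w0, T = 10.0, 1.0
t = np.linspace(-20.0, 20.0, 4096, endpoint=False)
dt = t[1] - t[0]
E_t = np.exp(-(t / T) ** 2) * np.cos(w0 * t)

# |E(w)| via the FFT; the overall phase from the time origin drops out
# once we take the magnitude.
w = 2 * np.pi * np.fft.fftfreq(len(t), d=dt)
spectrum = np.abs(np.fft.fft(E_t)) * dt

# The two lobes should peak near w = +/- w0 = +/- 10, with width ~ 1/T
w_peak = abs(w[np.argmax(spectrum)])
print(w_peak)
```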

FIXME: not sure what the point of this was?

# Sudden perturbations.

Given our wave equation

\begin{aligned}i \hbar \frac{d{{}}}{dt} {\lvert {\psi(t)} \rangle} = H(t) {\lvert {\psi(t)} \rangle}\end{aligned} \hspace{\stretch{1}}(3.8)

and a sudden perturbation in the Hamiltonian, as illustrated in figure (\ref{fig:suddenStepHamiltonian})

\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.2\textheight]{suddenStepHamiltonian}
\caption{Sudden step Hamiltonian.}
\end{figure}

Consider $H_0$ and $H_F$ fixed, and decrease $\Delta t \rightarrow 0$. We can formally integrate 3.8

\begin{aligned}\frac{d{{}}}{dt} {\lvert {\psi(t)} \rangle} = \frac{1}{{i \hbar}} H(t) {\lvert {\psi(t)} \rangle}\end{aligned} \hspace{\stretch{1}}(3.9)

to find

\begin{aligned}{\lvert {\psi(t)} \rangle} -{\lvert {\psi(t_0)} \rangle} = \frac{1}{{i \hbar}} \int_{t_0}^t H(t') {\lvert {\psi(t')} \rangle} dt'.\end{aligned} \hspace{\stretch{1}}(3.10)

While this is an exact solution, it is also not terribly useful since we don’t know ${\lvert {\psi(t)} \rangle}$. However, we can select the small interval $\Delta t$, and write

\begin{aligned}{\lvert {\psi(\Delta t/2)} \rangle} ={\lvert {\psi(-\Delta t/2)} \rangle}+ \frac{1}{{i \hbar}} \int_{-\Delta t/2}^{\Delta t/2} H(t') {\lvert {\psi(t')} \rangle} dt'.\end{aligned} \hspace{\stretch{1}}(3.11)

Note that we could use the integral kernel iteration technique here and substitute ${\lvert {\psi(t')} \rangle} = {\lvert {\psi(-\Delta t/2)} \rangle}$ and then develop this, to generate a power series with $(\Delta t/2)^k$ dependence. However, we note that 3.11 is still an exact relation, and if $\Delta t \rightarrow 0$, with the integration limits narrowing (provided $H(t')$ is well behaved) we are left with just

\begin{aligned}{\lvert {\psi(\Delta t/2)} \rangle} = {\lvert {\psi(-\Delta t/2)} \rangle}\end{aligned} \hspace{\stretch{1}}(3.12)

Or

\begin{aligned}{\lvert {\psi_{\text{after}}} \rangle} = {\lvert {\psi_{\text{before}}} \rangle},\end{aligned} \hspace{\stretch{1}}(3.13)

provided that we change the Hamiltonian fast enough. On the surface there appear to be no consequences, but there are some very serious ones!

## Example: Harmonic oscillator.

Consider our harmonic oscillator Hamiltonian, with

\begin{aligned}H_0 &= \frac{P^2}{2m} + \frac{1}{{2}} m \omega_0^2 X^2 \\ H_F &= \frac{P^2}{2m} + \frac{1}{{2}} m \omega_F^2 X^2\end{aligned} \hspace{\stretch{1}}(3.14)

Here $\omega_0 \rightarrow \omega_F$ continuously, but very quickly. In effect, we have tightened the spring constant. Note that there are cases in linear optics when you can actually do exactly that.

Imagine that ${\lvert {\psi_{\text{before}}} \rangle}$ is in the ground state of the harmonic oscillator as in figure (\ref{fig:suddenHamiltonianPertubationHO})

\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.2\textheight]{suddenHamiltonianPertubationHO}
\caption{Harmonic oscillator sudden Hamiltonian perturbation.}
\end{figure}

and we suddenly change the Hamiltonian with potential $V_0 \rightarrow V_F$ (weakening the “spring”). Professor Sipe gives us a graphical demo of this, by impersonating a constrained wavefunction with his arms, doing weak chicken-flapping of them. Now with the potential weakened, he wiggles and flaps his arms with more freedom and somewhat chaotically. His “wave function” arms are now bouncing around in the new limiting potential (initially overdoing it and then bouncing back).

We had in this case the exact relation

\begin{aligned}H_0 {\lvert {\psi_0^{(0)}} \rangle} = \frac{1}{{2}} \hbar \omega_0 {\lvert {\psi_0^{(0)}} \rangle}\end{aligned} \hspace{\stretch{1}}(3.16)

but we also have

\begin{aligned}{\lvert {\psi_{\text{after}}} \rangle} = {\lvert {\psi_{\text{before}}} \rangle} = {\lvert {\psi_0^{(0)}} \rangle}\end{aligned} \hspace{\stretch{1}}(3.17)

and

\begin{aligned}H_F {\lvert {\psi_n^{(f)}} \rangle} = \hbar \omega_F \left( n + \frac{1}{{2}} \right) {\lvert {\psi_n^{(f)}} \rangle}\end{aligned} \hspace{\stretch{1}}(3.18)

So

\begin{aligned}{\lvert {\psi_{\text{after}}} \rangle}&={\lvert {\psi_0^{(0)}} \rangle} \\ &=\sum_n {\lvert {\psi_n^{(f)}} \rangle}\underbrace{\left\langle{{\psi_n^{(f)}}} \vert {{\psi_0^{(0)}}}\right\rangle }_{c_n} \\ &=\sum_n c_n {\lvert {\psi_n^{(f)}} \rangle}\end{aligned}

and at later times

\begin{aligned}{\lvert {\psi(t)^{(f)}} \rangle}=\sum_n c_n e^{-i \omega_n^{(f)} t} {\lvert {\psi_n^{(f)}} \rangle},\end{aligned}

whereas

\begin{aligned}{\lvert {\psi(t)^{(o)}} \rangle}&=e^{-i \omega_0^{(0)} t} {\lvert {\psi_0^{(0)}} \rangle},\end{aligned}

So, while the wave functions may be exactly the same after such a sudden change in Hamiltonian, the dynamics of the situation change for all future times, since we now have a wavefunction that has a different set of components in the basis for the new Hamiltonian. In particular, the evolution of the wave function is now significantly more complex.

FIXME: plot an example of this.
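As a partial answer to that FIXME, the sudden $\omega_0 \rightarrow \omega_F$ change is easy to explore numerically: discretize both Hamiltonians on a grid, take the ground state of $H_0$, and project it onto the eigenbasis of $H_F$ to obtain the coefficients $c_n$. The grid, the units ($\hbar = m = 1$), and the frequency pair below are arbitrary choices of mine:

```python
import numpy as np

# Discretize both oscillator Hamiltonians on a grid (hbar = m = 1 and
# the grid/frequency values here are arbitrary choices).
hbar = m = 1.0
w0, wF = 1.0, 2.0                      # "tighten the spring": w0 -> wF
x = np.linspace(-8.0, 8.0, 1200)
dx = x[1] - x[0]

def hamiltonian(w):
    # H = -hbar^2/2m d^2/dx^2 + (1/2) m w^2 x^2, second-difference Laplacian
    off = np.full(len(x) - 1, 1.0)
    lap = (np.diag(off, -1) - 2.0 * np.eye(len(x)) + np.diag(off, 1)) / dx**2
    return -(hbar**2 / (2 * m)) * lap + np.diag(0.5 * m * w**2 * x**2)

E0, V0 = np.linalg.eigh(hamiltonian(w0))   # initial eigenstates
EF, VF = np.linalg.eigh(hamiltonian(wF))   # final eigenstates
psi_before = V0[:, 0]                      # ground state of H_0

# c_n = <psi_n^(f)|psi_0^(0)>: these coefficients fix the dynamics
# for all times after the sudden change.
c = VF.T @ psi_before
print(c[:4] ** 2)        # probabilities of landing in n = 0, 1, 2, 3
```

For two oscillator ground states the analytic overlap is ${\left\lvert{c_0}\right\rvert}^2 = 2\sqrt{\omega_0 \omega_F}/(\omega_0 + \omega_F) \approx 0.943$ here, the odd $n$ coefficients vanish by parity, and $\sum_n {\left\lvert{c_n}\right\rvert}^2 = 1$; the projection reproduces all of this.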

FIXME: what does Adiabatic mean in this context. The usage in class sounds like it was just “really slow and gradual”, yet this has a definition “Of, relating to, or being a reversible thermodynamic process that occurs without gain or loss of heat and without a change in entropy”.

# Adiabatic perturbations.

This is treated in section 17.5.2 of the text [1].

This is the reverse case, and we now vary the Hamiltonian $H(t)$ very slowly.

\begin{aligned}\frac{d{{}}}{dt} {\lvert {\psi(t)} \rangle} = \frac{1}{{i \hbar}} H(t) {\lvert {\psi(t)} \rangle}\end{aligned} \hspace{\stretch{1}}(4.19)

We first consider only non-degenerate states, and at $t = 0$ write

\begin{aligned}H(0) = H_0,\end{aligned} \hspace{\stretch{1}}(4.20)

and

\begin{aligned}H_0 {\lvert {\psi_s^{(0)}} \rangle} = E_s^{(0)} {\lvert {\psi_s^{(0)}} \rangle}\end{aligned} \hspace{\stretch{1}}(4.21)

Imagine that at each time $t$ we can find the “instantaneous” energy eigenstates

\begin{aligned}H(t) {\lvert {\hat{\psi}_s(t)} \rangle} = E_s(t) {\lvert {\hat{\psi}_s(t)} \rangle} \end{aligned} \hspace{\stretch{1}}(4.22)

These states do not satisfy Schrödinger’s equation, but are simply solutions to the eigen problem. Our standard strategy in perturbation theory is based on analysis of

\begin{aligned}{\lvert {\psi(t)} \rangle} = \sum_n c_n(t) e^{- i \omega_n^{(0)} t} {\lvert {\psi_n^{(0)} } \rangle},\end{aligned} \hspace{\stretch{1}}(4.23)

\begin{aligned}{\lvert {\psi(t)} \rangle} = \sum_n b_n(t) {\lvert {\hat{\psi}_n(t)} \rangle},\end{aligned} \hspace{\stretch{1}}(4.24)

Here we expand, not in our initial basis as in 4.23, but instead in terms of the instantaneous kets as in 4.24. Plugging into Schrödinger’s equation we have

\begin{aligned}H(t) {\lvert {\psi(t)} \rangle} &= H(t) \sum_n b_n(t) {\lvert {\hat{\psi}_n(t)} \rangle} \\ &= \sum_n b_n(t) E_n(t) {\lvert {\hat{\psi}_n(t)} \rangle} \end{aligned}

This was complicated before with matrix elements all over the place. Now it is easy; the time derivative, however, becomes harder. Computing that derivative we find

\begin{aligned}i \hbar \frac{d{{}}}{dt} {\lvert {\psi(t)} \rangle}&=i \hbar\frac{d{{}}}{dt} \sum_n b_n(t) {\lvert {\hat{\psi}_n(t)} \rangle} \\ &=i \hbar\sum_n \frac{d{{b_n(t)}}}{dt} {\lvert {\hat{\psi}_n(t)} \rangle} + i \hbar \sum_n b_n(t) \frac{d{{}}}{dt} {\lvert {\hat{\psi}_n(t)} \rangle} \\ &= \sum_n b_n(t) E_n(t) {\lvert {\hat{\psi}_n(t)} \rangle} \end{aligned}

We bra ${\langle {\hat{\psi}_m(t)} \rvert}$ into this

\begin{aligned}i \hbar\sum_n \frac{d{{b_n(t)}}}{dt} \left\langle{{\hat{\psi}_m(t)}} \vert {{\hat{\psi}_n(t)}}\right\rangle+ i \hbar \sum_n b_n(t) {\langle {\hat{\psi}_m(t)} \rvert}\frac{d{{}}}{dt} {\lvert {\hat{\psi}_n(t)} \rangle} = \sum_n b_n(t) E_n(t) \left\langle{{\hat{\psi}_m(t)}} \vert {{\hat{\psi}_n(t)}}\right\rangle ,\end{aligned} \hspace{\stretch{1}}(4.25)

and find

\begin{aligned}i \hbar\frac{d{{b_m(t)}}}{dt} + i \hbar \sum_n b_n(t) {\langle {\hat{\psi}_m(t)} \rvert}\frac{d{{}}}{dt} {\lvert {\hat{\psi}_n(t)} \rangle} = b_m(t) E_m(t) \end{aligned} \hspace{\stretch{1}}(4.26)

If the Hamiltonian is changed very, very slowly in time, we can imagine that ${\lvert {\hat{\psi}_n(t)} \rangle}$ is also changing very, very slowly, but we are not quite there yet. Let’s first split our sum of bra and ket products

\begin{aligned}\sum_n b_n(t) {\langle {\hat{\psi}_m(t)} \rvert}\frac{d{{}}}{dt} {\lvert {\hat{\psi}_n(t)} \rangle} \end{aligned} \hspace{\stretch{1}}(4.27)

into $n \ne m$ and $n = m$ terms. Looking at just the $n = m$ term

\begin{aligned}{\langle {\hat{\psi}_m(t)} \rvert}\frac{d{{}}}{dt} {\lvert {\hat{\psi}_m(t)} \rangle} \end{aligned} \hspace{\stretch{1}}(4.28)

we note

\begin{aligned}0 &=\frac{d{{}}}{dt} \left\langle{{\hat{\psi}_m(t)}} \vert {{\hat{\psi}_m(t)}}\right\rangle \\ &=\left( \frac{d{{}}}{dt} {\langle {\hat{\psi}_m(t)} \rvert} \right) {\lvert {\hat{\psi}_m(t)} \rangle} + {\langle {\hat{\psi}_m(t)} \rvert} \frac{d{{}}}{dt} {\lvert {\hat{\psi}_m(t)} \rangle} \end{aligned}

Something plus its complex conjugate can only equal zero if it has no real part

\begin{aligned}a + i b + (a + i b)^{*} = 2 a = 0 \implies a = 0,\end{aligned} \hspace{\stretch{1}}(4.29)

so ${\langle {\hat{\psi}_m(t)} \rvert} \frac{d{{}}}{dt} {\lvert {\hat{\psi}_m(t)} \rangle}$ must be purely imaginary. We write

\begin{aligned}{\langle {\hat{\psi}_m(t)} \rvert} \frac{d{{}}}{dt} {\lvert {\hat{\psi}_m(t)} \rangle} = -i \Gamma_m(t),\end{aligned} \hspace{\stretch{1}}(4.30)

where $\Gamma_m$ is real.
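That the overlap is purely imaginary is easy to check numerically for any smoothly parametrized, normalized ket. The two-component state below is an arbitrary example of mine, not anything from the lecture:

```python
import numpy as np

# A smoothly parametrized, normalized two-component "instantaneous" ket.
# The particular parametrization is an arbitrary choice.
def psi(t):
    theta, phi = 0.3 * t, 1.7 * t
    return np.array([np.cos(theta), np.exp(1j * phi) * np.sin(theta)])

t, dt = 0.8, 1e-6
dpsi = (psi(t + dt) - psi(t - dt)) / (2 * dt)    # central difference

overlap = np.vdot(psi(t), dpsi)                  # <psi_m| d/dt |psi_m>

# Normalization <psi|psi> = 1 forces the real part to vanish, leaving a
# purely imaginary overlap, i.e. -i Gamma with Gamma real.
print(overlap)
```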

# References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.

## My submission for PHY356 (Quantum Mechanics I) Problem Set 3.

Posted by peeterjoot on November 30, 2010

# Problem 1.

## Statement

A particle of mass $m$ is free to move along the x-direction such that $V(X)=0$. The state of the system is represented by the wavefunction Eq. (4.74)

\begin{aligned}\psi(x,t) = \frac{1}{{\sqrt{2\pi}}} \int_{-\infty}^\infty dk e^{i k x} e^{- i \omega t} f(k)\end{aligned} \hspace{\stretch{1}}(1.1)

with $f(k)$ given by Eq. (4.59).

\begin{aligned}f(k) &= N e^{-\alpha k^2}\end{aligned} \hspace{\stretch{1}}(1.2)

Note that I’ve inserted a $1/\sqrt{2\pi}$ factor above that isn’t in the text, because otherwise $\psi(x,t)$ will not be unit normalized (assuming $f(k)$ is normalized in wavenumber space).

\begin{itemize}
\item
(a) What is the group velocity associated with this state?
\item
(b) What is the probability for measuring the particle at position $x=x_0>0$ at time $t=t_0>0$?
\item
(c) What is the probability per unit length for measuring the particle at position $x=x_0>0$ at time $t=t_0>0$?
\item
(d) Explain the physical meaning of the above results.
\end{itemize}

## Solution

### (a). Group velocity.

To calculate the group velocity we need to know the dependence of $\omega$ on $k$.

Let’s step back and consider the time evolution action on $\psi(x,0)$. For the free particle case we have

\begin{aligned}H = \frac{\mathbf{p}^2}{2m} = -\frac{\hbar^2}{2m} \partial_{xx}.\end{aligned} \hspace{\stretch{1}}(1.3)

Writing $N' = N/\sqrt{2\pi}$ we have

\begin{aligned}-\frac{i t}{\hbar} H \psi(x,0) &= \frac{i t \hbar }{2m} N' \int_{-\infty}^\infty dk (i k)^2 e^{i k x - \alpha k^2} \\ &= N' \int_{-\infty}^\infty dk \frac{-i t \hbar k^2}{2m} e^{i k x - \alpha k^2}\end{aligned}

Each successive application of $-iHt/\hbar$ will introduce another power of $-it\hbar k^2/2 m$, so once we sum all the terms of the exponential series $U(t) = e^{-iHt/\hbar}$ we have

\begin{aligned}\psi(x,t) =N' \int_{-\infty}^\infty dk \exp\left( \frac{-i t \hbar k^2}{2m} + i k x - \alpha k^2 \right).\end{aligned} \hspace{\stretch{1}}(1.4)

Comparing with 1.1 we find

\begin{aligned}\omega(k) = \frac{\hbar k^2}{2m}.\end{aligned} \hspace{\stretch{1}}(1.5)

This completes this section of the problem since we are now able to calculate the group velocity

\begin{aligned}v_g = \frac{\partial {\omega(k)}}{\partial {k}} = \frac{\hbar k}{m}.\end{aligned} \hspace{\stretch{1}}(1.6)

### (b). What is the probability for measuring the particle at position $x=x_0>0$ at time $t=t_0>0$?

In order to evaluate the probability, it looks desirable to evaluate the wave function integral 1.4.
Writing $2 \beta = i/(\alpha + i t \hbar/2m )$, the exponent of that integral is

\begin{aligned}-k^2 \left( \alpha + \frac{i t \hbar }{2m} \right) + i k x&=-\left( \alpha + \frac{i t \hbar }{2m} \right) \left( k^2 - \frac{i k x }{\alpha + \frac{i t \hbar }{2m} } \right) \\ &=-\frac{i}{2\beta} \left( (k - x \beta )^2 - x^2 \beta^2 \right)\end{aligned}

The $x^2$ portion of the exponential

\begin{aligned}\frac{i x^2 \beta^2}{2\beta} = \frac{i x^2 \beta}{2} = - \frac{x^2 }{4 (\alpha + i t \hbar /2m)}\end{aligned}

then comes out of the integral. We can also make a change of variables $q = k - x \beta$ to evaluate the remainder of the Gaussian and are left with

\begin{aligned}\psi(x,t) =N' \sqrt{ \frac{\pi}{\alpha + i t \hbar/2m} } \exp\left( - \frac{x^2 }{4 (\alpha + i t \hbar /2m)} \right).\end{aligned} \hspace{\stretch{1}}(1.7)

Observe that from 1.2 we can compute $N = (2 \alpha/\pi)^{1/4}$, which could be substituted back into 1.7 if desired.
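As a spot check of the Gaussian integration, the closed form 1.7 can be compared against direct numerical quadrature of 1.4; the units ($\hbar = m = 1$), $\alpha$, and the sample point $(x, t)$ below are arbitrary choices of mine:

```python
import numpy as np

# hbar = m = 1 and alpha = 1 are arbitrary choices for this check
hbar = m = alpha = 1.0
N = (2 * alpha / np.pi) ** 0.25
Np = N / np.sqrt(2 * np.pi)          # N' = N / sqrt(2 pi)

x, t = 1.3, 0.7

# Direct numerical quadrature of 1.4
k = np.linspace(-20.0, 20.0, 200001)
dk = k[1] - k[0]
integrand = np.exp(-1j * t * hbar * k**2 / (2 * m) + 1j * k * x - alpha * k**2)
psi_numeric = Np * np.sum(integrand) * dk

# Closed form 1.7, with a = alpha + i t hbar / 2m
a = alpha + 1j * t * hbar / (2 * m)
psi_closed = Np * np.sqrt(np.pi / a) * np.exp(-x**2 / (4 * a))

print(abs(psi_numeric - psi_closed))   # agreement to quadrature precision
```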

Our probability density is

\begin{aligned}{\left\lvert{ \psi(x,t) }\right\rvert}^2 &=\frac{1}{{2 \pi}} N^2 {\left\lvert{ \frac{\pi}{\alpha + i t \hbar/2m} }\right\rvert} \exp\left( - \frac{x^2}{4} \left( \frac{1}{{(\alpha + i t \hbar /2m)}} + \frac{1}{{(\alpha - i t \hbar /2m)}} \right) \right) \\ &=\frac{1}{{2 \pi}} \sqrt{\frac{2 \alpha}{\pi} } \frac{\pi}{\sqrt{\alpha^2 + (t \hbar/2m)^2 }} \exp\left( - \frac{x^2}{4} \frac{1}{{\alpha^2 + (t \hbar/2m)^2 }} \left( \alpha - i t \hbar /2m + \alpha + i t \hbar /2m \right)\right) \end{aligned}

With a final regrouping of terms, this is

\begin{aligned}{\left\lvert{ \psi(x,t) }\right\rvert}^2 =\sqrt{\frac{ \alpha }{ 2 \pi \left(\alpha^2 + (t \hbar/2m)^2\right) }}\exp\left( - \frac{x^2}{2} \frac{\alpha}{\alpha^2 + (t \hbar/2m)^2 } \right).\end{aligned} \hspace{\stretch{1}}(1.8)

As a sanity check we observe that this integrates to unity for all $t$ as desired. The probability that we find the particle in the region $x > x_0$ is then

\begin{aligned}P_{x>x_0}(t) = \sqrt{\frac{ \alpha }{ 2 \pi \left(\alpha^2 + (t \hbar/2m)^2\right) }}\int_{x=x_0}^\infty dx \exp\left( - \frac{x^2}{2} \frac{\alpha}{\alpha^2 + (t \hbar/2m)^2 } \right)\end{aligned} \hspace{\stretch{1}}(1.9)

The only simplification we can make is to rewrite this in terms of the complementary error function

\begin{aligned}\text{erfc}(x) = \frac{2}{\sqrt{\pi}} \int_x^\infty e^{-t^2} dt.\end{aligned} \hspace{\stretch{1}}(1.10)

Writing

\begin{aligned}\beta(t) = \frac{\alpha}{\alpha^2 + (t \hbar/2m)^2 },\end{aligned} \hspace{\stretch{1}}(1.11)

we have

\begin{aligned}P_{x>x_0}(t_0) = \frac{1}{{2}} \text{erfc} \left( \sqrt{\beta(t_0)/2} x_0 \right)\end{aligned} \hspace{\stretch{1}}(1.12)

Sanity checking this result, we note that since $\text{erfc}(0) = 1$ the probability for finding the particle in the $x>0$ range is $1/2$ as expected.
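The erfc form 1.12 can also be sanity checked numerically by integrating the density 1.8 directly; the parameter values below ($\hbar = m = \alpha = 1$, and the chosen $t_0$, $x_0$) are arbitrary choices of mine:

```python
import numpy as np
from math import erfc, sqrt

# hbar = m = alpha = 1; t0, x0 are arbitrary positive values
hbar = m = alpha = 1.0
t0, x0 = 0.5, 0.8

beta = alpha / (alpha**2 + (t0 * hbar / (2 * m)) ** 2)   # 1.11

# Trapezoid-rule integral of the density 1.8 over x > x0
x = np.linspace(x0, x0 + 40.0, 400001)
dx = x[1] - x[0]
density = np.sqrt(beta / (2 * np.pi)) * np.exp(-beta * x**2 / 2)
P_numeric = (density.sum() - 0.5 * (density[0] + density[-1])) * dx

P_closed = 0.5 * erfc(sqrt(beta / 2) * x0)               # 1.12
print(P_numeric, P_closed)
```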

### (c). What is the probability per unit length for measuring the particle at position $x=x_0>0$ at time $t=t_0>0$?

Interpreting this as the probability of finding the particle in a unit length interval centered at $x_0$, we have

\begin{aligned}P_{x>x_0+1/2}(t_0) - P_{x>x_0-1/2}(t_0) &=\frac{1}{{2}} \text{erfc}\left( \sqrt{\frac{\beta(t_0)}{2}} \left(x_0+\frac{1}{{2}} \right) \right) -\frac{1}{{2}} \text{erfc}\left( \sqrt{\frac{\beta(t_0)}{2}} \left(x_0-\frac{1}{{2}} \right) \right) \end{aligned} \hspace{\stretch{1}}(1.13)

### (d). Explain the physical meaning of the above results.

To get an idea what the group velocity means, observe that we can write our wavefunction 1.1 as

\begin{aligned}\psi(x,t) = \frac{1}{{\sqrt{2\pi}}} \int_{-\infty}^\infty dk e^{i k (x - v_g t)} f(k)\end{aligned} \hspace{\stretch{1}}(1.14)

We see that the phase coefficient of the Gaussian $f(k)$ “moves” at the rate of the group velocity $v_g$. Also recall that in the text it is noted that the time dependent term 1.11 can be expressed in terms of position and momentum uncertainties $(\Delta x)^2$, and $(\Delta p)^2 = \hbar^2 (\Delta k)^2$. That is

\begin{aligned}\frac{1}{{\beta(t)}} = (\Delta x)^2 + \frac{(\Delta p)^2}{m^2} t^2 \equiv (\Delta x(t))^2\end{aligned} \hspace{\stretch{1}}(1.15)

This makes it evident that the probability density flattens and spreads over time at a rate equal to the uncertainty of the group velocity $\Delta p/m = \Delta v_g$ (since $v_g = \hbar k/m$). It is interesting that something as simple as this phase change results in a physically measurable phenomenon. As a direct result of this linear-in-time phase change, we are less able to find the particle localized around its original $x = 0$ position as more time elapses.

# Problem 2.

## Statement

A particle with intrinsic angular momentum or spin $s=1/2$ is prepared in the spin-up state with respect to the z-direction, ${\lvert {f} \rangle}={\lvert {z+} \rangle}$. Determine

\begin{aligned}\left({\langle {f} \rvert} \left( S_z - {\langle {f} \rvert} S_z {\lvert {f} \rangle} \mathbf{1} \right)^2 {\lvert {f} \rangle} \right)^{1/2}\end{aligned} \hspace{\stretch{1}}(2.16)

and

\begin{aligned}\left({\langle {f} \rvert} \left( S_x - {\langle {f} \rvert} S_x {\lvert {f} \rangle} \mathbf{1} \right)^2 {\lvert {f} \rangle} \right)^{1/2}\end{aligned} \hspace{\stretch{1}}(2.17)

and explain what these relations say about the system.

## Solution: Uncertainty of $S_z$ with respect to ${\lvert {z+} \rangle}$

Noting that $S_z {\lvert {f} \rangle} = S_z {\lvert {z+} \rangle} = \hbar/2 {\lvert {z+} \rangle}$ we have

\begin{aligned}{\langle {f} \rvert} S_z {\lvert {f} \rangle} = \frac{\hbar}{2} \end{aligned} \hspace{\stretch{1}}(2.18)

The average outcome for many measurements of the physical quantity associated with the operator $S_z$ when the system has been prepared in the state ${\lvert {f} \rangle} = {\lvert {z+} \rangle}$ is $\hbar/2$.

\begin{aligned}\Bigl(S_z - {\langle {f} \rvert} S_z {\lvert {f} \rangle} \mathbf{1} \Bigr) {\lvert {f} \rangle}&= \frac{\hbar}{2} {\lvert {f} \rangle} -\frac{\hbar}{2} {\lvert {f} \rangle} = 0\end{aligned} \hspace{\stretch{1}}(2.19)

We could also compute this from the matrix representations, but it is slightly more work.

Operating once more with $S_z - {\langle {f} \rvert} S_z {\lvert {f} \rangle} \mathbf{1}$ on the zero ket vector still gives us zero, so we have zero in the root for 2.16

\begin{aligned}\left({\langle {f} \rvert} \left( S_z - {\langle {f} \rvert} S_z {\lvert {f} \rangle} \mathbf{1} \right)^2 {\lvert {f} \rangle} \right)^{1/2} = 0\end{aligned} \hspace{\stretch{1}}(2.20)

What does 2.20 say about the state of the system? Given many measurements of the physical quantity associated with the operator $V = (S_z - {\langle {f} \rvert} S_z {\lvert {f} \rangle} \mathbf{1})^2$, where the initial state of the system is always ${\lvert {f} \rangle} = {\lvert {z+} \rangle}$, then the average of the measurements of the physical quantity associated with $V$ is zero. We can think of the operator $V^{1/2} = S_z - {\langle {f} \rvert} S_z {\lvert {f} \rangle} \mathbf{1}$ as a representation of the observable, “how different is the measured result from the average ${\langle {f} \rvert} S_z {\lvert {f} \rangle}$”.

So, given a system prepared in state ${\lvert {f} \rangle} = {\lvert {z+} \rangle}$, and performance of repeated measurements capable of only examining spin-up, we find that the system is never any different than its initial spin-up state. We have no uncertainty that we will measure any difference from spin-up on average, when the system is prepared in the spin-up state.

## Solution: Uncertainty of $S_x$ with respect to ${\lvert {z+} \rangle}$

For this second part of the problem, we note that we can write

\begin{aligned}{\lvert {f} \rangle} = {\lvert {z+} \rangle} = \frac{1}{{\sqrt{2}}} ( {\lvert {x+} \rangle} + {\lvert {x-} \rangle} ).\end{aligned} \hspace{\stretch{1}}(2.21)

So the expectation value of $S_x$ with respect to this state is

\begin{aligned}{\langle {f} \rvert} S_x {\lvert {f} \rangle}&=\frac{1}{{2}}( {\langle {x+} \rvert} + {\langle {x-} \rvert} ) S_x ( {\lvert {x+} \rangle} + {\lvert {x-} \rangle} ) \\ &=\frac{\hbar}{4} ( {\langle {x+} \rvert} + {\langle {x-} \rvert} ) ( {\lvert {x+} \rangle} - {\lvert {x-} \rangle} ) \\ &=\frac{\hbar}{4} ( 1 - 0 + 0 - 1 ) \\ &= 0\end{aligned}

After repeated preparation of the system in state ${\lvert {f} \rangle}$, the average measurement of the physical quantity associated with operator $S_x$ is zero. In terms of the eigenstates for that operator ${\lvert {x+} \rangle}$ and ${\lvert {x-} \rangle}$ we have equal probability of measuring either given this particular initial system state.

For the variance calculation, this reduces our problem to the calculation of ${\langle {f} \rvert} S_x^2 {\lvert {f} \rangle}$, which is

\begin{aligned}{\langle {f} \rvert} S_x^2 {\lvert {f} \rangle} &=\frac{1}{{2}} \left( \frac{\hbar}{2} \right)^2 ( {\langle {x+} \rvert} + {\langle {x-} \rvert} ) ( (+1)^2 {\lvert {x+} \rangle} + (-1)^2 {\lvert {x-} \rangle} ) \\ &=\left( \frac{\hbar}{2} \right)^2,\end{aligned}

so for 2.17 we have

\begin{aligned}\left({\langle {f} \rvert} \left( S_x - {\langle {f} \rvert} S_x {\lvert {f} \rangle} \mathbf{1} \right)^2 {\lvert {f} \rangle} \right)^{1/2} = \frac{\hbar}{2}\end{aligned} \hspace{\stretch{1}}(2.22)

The average of the absolute magnitude of the physical quantity associated with operator $S_x$ is found to be $\hbar/2$ when repeated measurements are performed given a system initially prepared in state ${\lvert {f} \rangle} = {\lvert {z+} \rangle}$. We saw that the average value for the measurement of that physical quantity itself was zero, showing that we have equal probabilities of measuring either $\pm \hbar/2$ for this experiment. A measurement that would show the system was in the x-direction spin-up or spin-down states would find that these states are equi-probable.
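Both uncertainty computations can be cross-checked with the $2 \times 2$ matrix representations, which the solution above avoided. This numpy sketch is my own cross-check, not part of the submission:

```python
import numpy as np

hbar = 1.0
Sz = (hbar / 2) * np.array([[1, 0], [0, -1]], dtype=complex)
Sx = (hbar / 2) * np.array([[0, 1], [1, 0]], dtype=complex)
f = np.array([1, 0], dtype=complex)       # |z+> in the z basis

def uncertainty(S, state):
    # sqrt( <f| (S - <S> 1)^2 |f> )
    avg = np.vdot(state, S @ state).real
    dev = S - avg * np.eye(2)
    return np.sqrt(np.vdot(state, dev @ (dev @ state)).real)

print(uncertainty(Sz, f))    # 0.0: no spread about <Sz> = hbar/2
print(uncertainty(Sx, f))    # 0.5 = hbar/2: outcomes +/- hbar/2 equally likely
```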

# Grading notes.

I lost one mark on the group velocity response. Instead of 1.6 he wanted

\begin{aligned}v_g = {\left. \frac{\partial {\omega(k)}}{\partial {k}} \right\vert}_{k = k_0}= \frac{\hbar k_0}{m} = 0\end{aligned} \hspace{\stretch{1}}(3.23)

since $f(k)$ peaks at $k=0$.

I’ll have to go back and think about that a bit, because I’m unsure of the last bits of the reasoning there.

I also lost 0.5 and 0.25 (twice) because I didn’t explicitly state that the probability that the particle is at $x_0$, a specific single point, is zero. I thought that was obvious and didn’t have to be stated, but it appears expressing this explicitly is what he was looking for.

Curiously, one thing that I didn’t lose marks on was the wrong answer for the probability per unit length. What he was actually asking for was the following

\begin{aligned}\lim_{\epsilon \rightarrow 0} \frac{1}{{\epsilon}} \int_{x_0 - \epsilon/2}^{x_0 + \epsilon/2} {\left\lvert{ \Psi(x_0, t_0) }\right\rvert}^2 dx = {\left\lvert{\Psi(x_0, t_0)}\right\rvert}^2\end{aligned} \hspace{\stretch{1}}(3.24)

That’s a much more sensible quantity to calculate than what I did, but I don’t think I can be faulted too much, since the phrase was never used in the text nor in the lectures.

## Relativistic origins of the Schrödinger equation

Posted by peeterjoot on July 5, 2009

# Goals and approach.

Most introductory quantum texts present some attempt to motivate Schrödinger’s equation. The quality, size, and approach of these treatments vary widely.

Pauli’s Wave Mechanics ([1]) differs from most of these, utilizing relativistic arguments to motivate the Schrödinger equation. His little quantum book starts off, not with the Bohr model or black bodies, but with a lightning fast two page treatment of special relativity.

The Wikipedia Klein-Gordon article ([2]) indicates that this is also the historical approach used initially by Schrödinger.

This blog entry follows Pauli’s treatment closely. The starting point will not be “see optics”, but an attempt at a logical progression building on basic results of electromagnetism, Fourier techniques, and Lorentz invariance.

From special relativity the Lorentz invariant for energy and momentum $E^2 - c^2 \mathbf{p}^2 = (m c^2)^2$ is required. An optional review of how this follows from the definition of Lorentz invariant length is included below. For any sort of complete coverage of special relativity other sources should be consulted.

Fourier transforms will be used to find general solutions of the wave equation for components of the electric or magnetic fields in vacuum

$\square \mathbf{E} \equiv \frac{1}{c^2} \frac{\partial^2 \mathbf{E}}{\partial t^2} - \boldsymbol{\nabla}^2 \mathbf{E} = 0 \quad\quad\quad(1)$
$\square \mathbf{B} \equiv \frac{1}{c^2} \frac{\partial^2 \mathbf{B}}{\partial t^2} - \boldsymbol{\nabla}^2 \mathbf{B} = 0 \quad\quad\quad(2)$

Fixing notation, the symmetric transform pair convention will be used

$\mathcal{F}(f(\mathbf{x})) = \hat{f}(\mathbf{k}) = \frac{1}{(\sqrt{2\pi})^3} \int f(\mathbf{x}) \exp\left( -i \mathbf{k} \cdot \mathbf{x} \right) d^3 x \quad\quad\quad(3)$
$\mathcal{F}^{-1}({f}(\mathbf{k})) = f(\mathbf{x}) = \frac{1}{(\sqrt{2\pi})^3} \int \hat{f}(\mathbf{k}) \exp\left( i \mathbf{k} \cdot \mathbf{x} \right) d^3 k \quad\quad\quad(4)$

As with Fourier solutions of the heat equation ([3]), the wave equation when expressed in the wave number domain will be a much simpler equation to solve.

A relation between the invariant length of the energy momentum four vector for light and the electrodynamic wave equation solution will be observed. Using this observation, together with the quantization by frequency from the photoelectric effect and the DeBroglie hypothesis, a natural relativistic matter wave equation (i.e. the Klein-Gordon equation) can be formed.

Finally, a Taylor expansion of wave function solutions to the Klein-Gordon equation around the rest angular frequency will be made. The end result will be finding the traditional introductory form of Schrödinger’s equation hiding in this relativistic matter wave equation.

# Relativity prerequisites.

In these notes the space time trajectory of a particle will be represented as the pair of locally observable quantities or a column vector equivalent

$X = (ct, \mathbf{x})$

In analogy to the distance invariance with respect to rotation in Euclidean space, the invariant (squared) length of a four vector with respect to Lorentz transformation is

$X^2 \equiv c^2 t^2 - \mathbf{x}^2 \equiv c^2 t^2 - \mathbf{x} \cdot \mathbf{x}$

One can verify without any trouble that such a generalized length is unchanged by rotation

$\begin{bmatrix}ct' \\ x' \\ y' \\ z' \\ \end{bmatrix}=\begin{bmatrix}1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & \cos\theta & \sin\theta \\ 0 & 0 & -\sin\theta & \cos\theta \\ \end{bmatrix}\begin{bmatrix}ct \\ x \\ y \\ z \\ \end{bmatrix}$

And also unchanged by Lorentz boost
$\begin{bmatrix}ct' \\ x' \\ y' \\ z' \\ \end{bmatrix}=\begin{bmatrix}\cosh\alpha & -\sinh\alpha & 0 & 0 \\ -\sinh\alpha & \cosh\alpha & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix}\begin{bmatrix}ct \\ x \\ y \\ z \\ \end{bmatrix}$
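One can also let the machine do the verification. The sketch below (the rapidity and the event coordinates are arbitrary choices of mine) checks that the boost matrix above preserves $c^2 t^2 - \mathbf{x}^2$:

```python
import numpy as np

def boost(alpha):
    # Lorentz boost along x with rapidity alpha, as in the matrix above
    ch, sh = np.cosh(alpha), np.sinh(alpha)
    return np.array([[ch, -sh, 0, 0],
                     [-sh, ch, 0, 0],
                     [0, 0, 1, 0],
                     [0, 0, 0, 1]])

def length2(X):
    # Lorentz (squared) length c^2 t^2 - x^2 - y^2 - z^2
    ct, x, y, z = X
    return ct**2 - x**2 - y**2 - z**2

X = np.array([3.0, 1.0, -2.0, 0.5])     # an arbitrary event (ct, x, y, z)
Xp = boost(0.7) @ X

print(length2(X), length2(Xp))          # equal up to rounding (3.75)
```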

Although the Lorentz length of a four vector does not change under rotation or boost (or composition of the two), that does not mean that this length is a constant. Consider the worldline of a particle at rest at the origin of the observer frame

$X = (c t, 0)$

The Lorentz length in this frame is $c^2 t^2$. In general the Lorentz length will be a function of all the coordinates.

Those vectors that have constant length are particularly useful, and can be constructed from scalar multiples of unit vectors. In particular for the time evolution of a particle’s worldline from an observer frame one has

$\frac{dX}{dt} = \left(c, \frac{d\mathbf{x}}{dt}\right)$

Writing $\mathbf{v} = d\mathbf{x}/dt$, the Lorentz length and corresponding unit vector $V \equiv {\frac{dX}{dt}}/{\sqrt{\left(\frac{dX}{dt}\right)^2}}$ are then, respectively,

$\left(\frac{dX}{dt}\right)^2 = c^2 - \mathbf{v}^2$
$V = \frac{1}{\sqrt{1 - \mathbf{v}^2/c^2 }} \left(1, \mathbf{v}/c \right)$

Finally, a scaling by $mc$ of this dimensionless “proper” velocity $V$ yields a vector with dimensions of momentum, the relativistic energy momentum vector (a definition). This vector and its Lorentz length are
$P \equiv mc V = \frac{1}{\sqrt{1 - \mathbf{v}^2/c^2 }} \left(mc^2/c, m \mathbf{v} \right) = (E/c, \mathbf{p}) \quad\quad\quad(13)$
$E^2/c^2 - \mathbf{p}^2 = m^2 c^2 \quad\quad\quad(14)$

When the particle is observed at rest ($\mathbf{p}=0$), the Lorentz length (14) provides the familiar $E= m c^2$ relation. Observe that for the Lorentz length of this energy momentum pairing to come out so nicely constant, the relativistic definitions of energy and momentum are required

$E \equiv \frac{m c^2}{\sqrt{1 - \mathbf{v}^2/c^2}} = m c^2 + \frac{1}{2}m \mathbf{v}^2 + \cdots \quad\quad\quad(15)$
$\mathbf{p} \equiv \frac{m \mathbf{v}}{\sqrt{1 - \mathbf{v}^2/c^2}} = m \mathbf{v} + \frac{1}{2}m \mathbf{v} \mathbf{v}^2/c^2 + \cdots \quad\quad\quad(16)$

Only in the small velocity limit are the Newtonian kinetic energy $m\mathbf{v}^2/2$ and momentum $m\mathbf{v}$ the significant portions of these Taylor series.
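Plugging numbers into (13) through (16) makes the invariant and the small velocity limit concrete. The velocity and unit choices below are arbitrary:

```python
import numpy as np

c, m = 1.0, 1.0        # unit choices for this check
v = 0.1                # |v| << c

gamma = 1.0 / np.sqrt(1 - v**2 / c**2)
E = gamma * m * c**2   # relativistic energy (15)
p = gamma * m * v      # relativistic momentum (16)

# The invariant (14): E^2/c^2 - p^2 = m^2 c^2, independent of v
print(E**2 / c**2 - p**2)

# At small v the Newtonian terms dominate the Taylor series
print(E - (m * c**2 + 0.5 * m * v**2))   # O(v^4) remainder
print(p - m * v)                         # O(v^3) remainder
```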

# Light quantization and DeBroglie hypothesis

The notion that light is quantized, coming in discrete frequency dependent packets of energy and momentum (photons), is now a familiar one.

In symbols

$E = h \nu = \hbar \omega$

With zero mass for a photon, the invariance relation (14) implies that the magnitude of the photon momentum is not independent of $\omega$ and in fact must be

${\left\lvert{\mathbf{p}}\right\rvert} = \frac{\hbar \omega}{c}$

It is customary to write

${\mathbf{p}} = \hbar \mathbf{k}$

so that the energy momentum four vector for a photon is

$P = \hbar ( \omega/c, \mathbf{k} )$
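Consistency with (14) for $m = 0$ can be checked numerically; a minimal sketch, assuming numpy and an arbitrarily chosen wave vector:

```python
# Hypothetical check: the photon four-momentum P = hbar (omega/c, k), with
# omega = c |k|, has zero Lorentz length, as (14) requires for m = 0.
import numpy as np

hbar = 1.054571817e-34  # J s
c = 299792458.0         # m/s

k = np.array([1.0e7, 2.0e7, -0.5e7])  # arbitrary wave vector components, 1/m
omega = c * np.linalg.norm(k)         # the light-like constraint

P = hbar * np.array([omega / c, *k])  # (E/c, p)
lorentz_length_sq = P[0]**2 - np.dot(P[1:], P[1:])

assert abs(lorentz_length_sq) < 1e-60  # zero up to floating point rounding
```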

DeBroglie’s extension ([4]) of the quantum relation for photon energy was to write, for massive particles,

$h \nu = \frac{m c^2}{\sqrt{1 - \mathbf{v}^2/c^2}}$

This, together with (14), provides a quantized invariant relation for energy momentum

$\hbar^2 \left( \frac{\omega^2}{c^2} - \mathbf{k}^2 \right) = m^2 c^2 \quad\quad\quad(22)$

Once the solution of the wave equation for light (i.e. electromagnetic fields) has been examined, this invariant can be used directly to construct a relativistic wave equation for matter (the Klein-Gordon equation), which is the next step along this path to the traditional non-relativistic Schrödinger equation.

# Solution of the relativistic wave equation.

The next order of business is the solution of the wave equations for the six components of equations (1) and (2). Writing $\psi$ for one of the components of $\mathbf{E}$ or $\mathbf{B}$, one is left with a scalar homogeneous equation to solve

$\left( \frac{1}{c^2}\frac{\partial^2}{{\partial t}^2} - \boldsymbol{\nabla}^2 \right) \psi(t,\mathbf{x}) = 0 \quad\quad\quad(23)$

Let’s begin the attack, applying the transform (3) to both terms of the wave equation

$\frac{1}{(\sqrt{2\pi})^3} \int \frac{1}{c^2} \frac{\partial^2 \psi(t,\mathbf{x})}{\partial t^2}\exp\left( -i \mathbf{k} \cdot \mathbf{x} \right) d^3 x =\sum_{m=1}^3 \frac{1}{(\sqrt{2\pi})^3} \int \frac{\partial^2 \psi(t, \mathbf{x}) }{\partial {x_m}^2} \exp\left( -i \mathbf{k} \cdot \mathbf{x} \right) d^3 x$

Now send the Rigor police on vacation, demanding of $\psi$ that it and its derivatives vanish at the boundaries of the integration region, and that sufficient continuity exists that the time derivatives can be pulled out of the LHS integral. With this demand of good behavior made, pull the time differentiation out of the integral on the LHS and integrate by parts twice on the RHS for

$\frac{1}{c^2} \frac{\partial^2 \hat{\psi}(t,\mathbf{k})}{\partial t^2} = (-i)^2 \mathbf{k}^2 \hat{\psi}(t,\mathbf{k})$

Except that the integration constants may be functions of $\mathbf{k}$ (the time derivatives are partials), this is the harmonic oscillator equation, with solution

$\hat{\psi}_{\pm}(t,\mathbf{k}) = D_{\pm}(\mathbf{k}) \exp(\pm i c {\left\lvert{\mathbf{k}}\right\rvert} t) \quad\quad\quad(26)$

Evaluation at $t=0$ eliminates the exponential, so by (4) the integration constants $D_{\pm}(\mathbf{k})$ may be expressed in terms of the initial time Fourier transforms of the wave function.

$D_{\pm}(\mathbf{k}) = \hat{\psi}_{\pm}(0,\mathbf{k}) = \frac{1}{(\sqrt{2\pi})^3} \int \psi_{\pm}(0,\mathbf{x}) \exp\left( -i \mathbf{k} \cdot \mathbf{x} \right) d^3 x \quad\quad\quad(27)$

Writing the inverse Fourier transformation (4) now completely specifies the time evolution of these wave function solutions given the initial time field

${\psi}_{\pm}(t,\mathbf{x}) = \frac{1}{(2\pi)^3} \int \psi_{\pm}(0,\mathbf{x}') \exp\left( -i \mathbf{k} \cdot (\mathbf{x}' -\mathbf{x}) \pm i c {\left\lvert{\mathbf{k}}\right\rvert} t \right) d^3 x' d^3 k$

Many interesting things can be done with this result and most will be ignored here. Instead, put the initial time integration into a black box, avoiding any explicit statement of the initial conditions (the Rigor police are still on vacation and cannot catch this blatant disregard for integration order). Dropping the explicit $\pm$ subscripts on the $\mathbf{k}$ dependent function of the integral, the wave function is now

$A(\mathbf{k}) = \frac{1}{({2\pi})^3} \int \psi(0,\mathbf{x}') \exp\left( -i \mathbf{k} \cdot \mathbf{x}' \right) d^3 x'$
${\psi}(t,\mathbf{x}) = \int A(\mathbf{k}) \exp\left( i \mathbf{k} \cdot \mathbf{x} \pm i c {\left\lvert{\mathbf{k}}\right\rvert} t \right) d^3 k \quad\quad\quad(30)$

Inspection shows that $c {\left\lvert{\mathbf{k}}\right\rvert}$ has the appearance of an angular frequency, and a slightly more conventional looking form can be achieved by making this explicit

$\omega = c {\left\lvert{\mathbf{k}}\right\rvert} \quad\quad\quad(31)$
${\psi}(t,\mathbf{x}) = \int A(\mathbf{k}) e^{ i (\mathbf{k} \cdot \mathbf{x} \pm \omega t) } d^3 k \quad\quad\quad(32)$

This (constrained) superposition of fundamental harmonics represents a general solution to the wave equation for the components
of the electromagnetic field.
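Each harmonic in this superposition individually satisfies (23), which can be confirmed symbolically; a sketch assuming sympy:

```python
# Hypothetical check: a single mode exp(i(k.x ± omega t)), with omega = c|k|
# as in (31), is annihilated by the wave equation operator of (23).
import sympy as sp

t, x, y, z = sp.symbols('t x y z', real=True)
c = sp.symbols('c', positive=True)
k1, k2, k3 = sp.symbols('k1 k2 k3', real=True)

omega = c * sp.sqrt(k1**2 + k2**2 + k3**2)
for sign in (1, -1):
    psi = sp.exp(sp.I * (k1*x + k2*y + k3*z + sign*omega*t))
    box_psi = (sp.diff(psi, t, 2)/c**2
               - sp.diff(psi, x, 2) - sp.diff(psi, y, 2) - sp.diff(psi, z, 2))
    assert sp.simplify(box_psi) == 0  # both sign choices solve (23)
```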

Forgetting temporarily the lightlike constraint (31) on angular frequency observe the effect of applying the wave equation operator to
(32)

$\square {\psi}(t,\mathbf{x}) = -\int A(\mathbf{k}) \left( \frac{\omega^2}{c^2} - \mathbf{k}^2 \right) e^{ i (\mathbf{k} \cdot \mathbf{x} \pm \omega t) } d^3 k \quad\quad\quad(33)$

It is clear that functions of the form $f(\mathbf{k} \cdot \mathbf{x} \pm c {\left\lvert{\mathbf{k}}\right\rvert} t)$ explicitly encode the null vector properties required for light-like worldline trajectories. If this strict proportionality between angular frequency and wave number is relaxed then it is reasonable to assume that such a wave function could then describe phenomena (for massive particles) within the light cone.

In particular observe the effect in (33) if the DeBroglie invariant (22) is applied to (32)

$\square {\psi}(t,\mathbf{x}) = -\frac{m^2 c^2}{\hbar^2} \psi$

This modified wave equation (the Klein-Gordon equation) still describes the electric and magnetic field components, since for massless photons the new term vanishes, and it is additionally not an unreasonable candidate for a wave equation for particles with mass.

# Taylor expansion of the Klein-Gordon equation around the rest angular frequency.

To transition from the covariant Klein-Gordon equation to one with an explicit spacetime split, consider an angular frequency approximation similar to that used for kinetic energy in (15). From the DeBroglie invariant (22) rearrange for the angular frequency

$\omega = \frac{m c^2}{\hbar} \sqrt{ 1 + \frac{\hbar^2 \mathbf{k}^2}{m^2 c^2}}$

If $(\hbar^2 \mathbf{k}^2)/(m^2 c^2) < 1$ is small enough a Taylor expansion is possible

$\omega = \frac{m c^2}{\hbar} + \frac{\hbar \mathbf{k}^2}{2 m} + \cdots$
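This expansion can be reproduced mechanically; a sketch assuming sympy, with $\hbar \mathbf{k}/(m c)$ as the small quantity:

```python
# Hypothetical check: Taylor expand omega(k) from the DeBroglie invariant
# about k = 0, recovering the rest frequency plus the quadratic term.
import sympy as sp

hbar, m, c, k = sp.symbols('hbar m c k', positive=True)

omega = (m * c**2 / hbar) * sp.sqrt(1 + hbar**2 * k**2 / (m**2 * c**2))
expansion = sp.series(omega, k, 0, 4).removeO()  # terms up to k^3

assert sp.simplify(expansion - (m*c**2/hbar + hbar*k**2/(2*m))) == 0
```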

With the zeroth order term factored out the wave function integral (32) becomes

${\psi}(t,\mathbf{x}) = e^{\pm im c^2 t /\hbar} \int A(\mathbf{k}) \exp\left( i \left(\mathbf{k} \cdot \mathbf{x} \pm \left(\frac{\hbar \mathbf{k}^2}{2 m} + \cdots \right) t \right) \right) d^3 k$

It is natural to bundle the integral into a helper variable

${\psi}(t,\mathbf{x}) = e^{\pm im c^2 t /\hbar} \psi'(t,\mathbf{x}) \quad\quad\quad(38)$

Note that there is not actually any requirement to drop the quadratic and higher order terms here. If one does so, this could be called a small momentum approximation. A more accurate description is a Taylor expansion around the rest frequency $m c^2/\hbar$.

Application of the wave equation operator to the product (38) is now possible. Let’s do this in pieces, starting with the time derivatives

$\frac{1}{c^2}\frac{\partial^2}{\partial t^2} \left( e^{\pm im c^2 t /\hbar} \psi' \right)=\frac{1}{c^2}\frac{\partial}{\partial t} \left( \left( \pm \frac{i m c^2}{\hbar} \psi' + \frac{\partial \psi'}{\partial t} \right) e^{\pm im c^2 t /\hbar} \right)$

Second partials give
$\frac{1}{c^2}\frac{\partial^2}{\partial t^2} e^{\pm im c^2 t /\hbar} \psi' = \frac{1}{c^2} \left( - \left(\frac{m c^2}{\hbar}\right)^2 \psi' \pm \frac{2 i m c^2}{\hbar} \frac{\partial \psi'}{\partial t} + \frac{\partial^2 \psi'}{\partial t^2} \right) e^{\pm im c^2 t /\hbar}$

And finally for the entire wave equation
$0 = \left( \square + \frac{m^2 c^2}{\hbar^2} \right) \psi=\frac{1}{c^2}\left(\pm 2 i \frac{m c^2}{\hbar} \frac{\partial \psi'}{\partial t} +\frac{\partial^2 \psi'}{\partial t^2}- c^2 \boldsymbol{\nabla}^2 \psi'\right)\exp\left( \pm i \frac{m c^2}{\hbar} t \right)$

A final rearrangement produces something quite close to the Schrödinger equation as it is probably first seen in the (less general) non-Hamiltonian form

$-\frac{\hbar^2}{2m} \boldsymbol{\nabla}^2 \psi' = \mp i \hbar \frac{\partial \psi'}{\partial t} -\frac{\hbar^2}{2m c^2} \frac{\partial^2 \psi'}{\partial t^2}$
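The cancellation of the rest energy terms in this reduction can be verified symbolically; a sketch in one spatial dimension (upper sign), assuming sympy, with psi_p standing in for $\psi'$:

```python
# Hypothetical check: apply the Klein-Gordon operator to the factored
# wave function e^{i m c^2 t/hbar} psi'(t, x) and confirm the residue.
import sympy as sp

t, x = sp.symbols('t x', real=True)
m, c, hbar = sp.symbols('m c hbar', positive=True)
psi_p = sp.Function('psi_p')(t, x)

phase = sp.exp(sp.I * m * c**2 * t / hbar)
psi = phase * psi_p
kg = sp.diff(psi, t, 2)/c**2 - sp.diff(psi, x, 2) + (m*c/hbar)**2 * psi

# The rest energy terms cancel, leaving first and second order time
# derivatives of psi' and the spatial second derivative.
expected = (2*sp.I*m/hbar * sp.diff(psi_p, t)
            + sp.diff(psi_p, t, 2)/c**2
            - sp.diff(psi_p, x, 2)) * phase

assert sp.simplify(sp.expand(kg - expected)) == 0
```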

There are two notable differences from the usual Schrödinger equation: one is the sign, and the other is the second order time partial. Since $\hbar$ is itself a small quantity, the sizes of the coefficients in this wave equation have to be compared to each other rather than judged absolutely. Taking $m$ as the electron mass, the second order time coefficient relative to the first is of the order $\hbar/(m_e c^2) \approx 10^{-21}$ seconds, small enough that omitting that term is justifiable.
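The quoted coefficient size is easy to reproduce with CODATA constants; a minimal numeric sketch:

```python
# Hypothetical check: the second order time coefficient, relative to the
# first order one, is of order hbar/(m_e c^2) seconds for an electron.
hbar = 1.054571817e-34   # J s
m_e = 9.1093837015e-31   # kg
c = 299792458.0          # m/s

tau = hbar / (m_e * c**2)
assert 1e-22 < tau < 1e-20  # about 1.3e-21 s, matching the order quoted
```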

Once that term is dropped two equations are left, one of which is the three dimensional potential free Schrödinger’s equation.

$-\frac{\hbar^2}{2m} \boldsymbol{\nabla}^2 \psi' = \mp i \hbar \frac{\partial \psi'}{\partial t} \quad\quad\quad(43)$

The alternation in sign is suggestive of complex conjugate behavior between the two solutions, but it is helpful to know to expect this in the first place!
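One of the two sign choices in (43) is the familiar free particle Schrödinger equation, whose momentum eigenstates follow directly from the quadratic dispersion above; a sketch assuming sympy:

```python
# Hypothetical check: exp(i(k x - hbar k^2 t/(2 m))) satisfies the free
# particle Schrödinger equation for one of the sign choices in (43).
import sympy as sp

t, x, k = sp.symbols('t x k', real=True)
m, hbar = sp.symbols('m hbar', positive=True)

psi = sp.exp(sp.I * (k*x - hbar*k**2*t/(2*m)))
lhs = -hbar**2/(2*m) * sp.diff(psi, x, 2)  # kinetic term
rhs = sp.I * hbar * sp.diff(psi, t)        # first order time derivative

assert sp.simplify(lhs - rhs) == 0
```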

This presentation attempts to show that the Schrödinger equation (43) has some deeply rooted relativistic origins. Since the spatial and time derivatives in the Schrödinger equation are not even of the same order, it is not obvious that quantum mechanics and relativity have any sort of association with each other. This is reflected in treatments of quantum mechanics, where many introductory QM texts will not mention relativity at all, except perhaps to note that the Schrödinger equation is not valid at speeds where $v/c$ is significant. Finding a statement like “There is no inherent connection between special relativity and quantum mechanics” ([5]) shows that this apparent disconnect goes both ways.

There are obvious deficiencies in this treatment. In particular the quantization of light was glossed over, and if it had been pursued, would be grossly wrong. Part of the problem is that solutions of Maxwell’s equations in vacuum are not entirely equivalent to six independent wave equations for the components of the fields. Additional constraints are also imposed by the Maxwell equations, introducing a coupling between the field components. For example, in a linearly polarized plane wave one has $\mathbf{B} = \hat{\mathbf{k}} \times \mathbf{E}$, and the triplet of $\mathbf{E}$, $\mathbf{B}$, and $\hat{\mathbf{k}}$ (the propagation direction) form a set of mutually perpendicular vectors ([6]). An expectation calculation based on a quantized light equation that neglects this coupling cannot possibly recover Maxwell’s equations. One could perhaps start with the simpler four vector potential Maxwell wave equations $\partial_\mu\partial^\mu A^\nu = 0$ under the Lorentz gauge $\partial_\mu A^\mu = 0$. That would reduce the problem to dealing with only four coupled equations instead of six, and the coupling is considerably simpler. Perhaps if this were pursued more carefully one would end up with QFT. That’s an interesting potential digression, but not the goal here and now.

As mentioned previously, the goal was really to highlight some of the relativistic connections of quantum mechanics. A secondary goal was personal: having never seen any single completely satisfying attempt to motivate the Schrödinger equation, it seemed reasonable to attempt to enunciate one myself. Given my current understanding of mathematics and physics, Pauli’s SR based motivation was found to be one of the most logical, had no steps that were particularly surprising, but suffered from too much brevity. This made it a good starting point, and perhaps the result is slightly more accessible.

Somebody who knows quantum mechanics well (or quantum field theory) would likely consider these notes completely backwards: one can logically go from quantum to classical, but going the other way around is impossible. That is quite likely true, but to somebody like this author, just starting to learn QM, pointing this out is not particularly helpful.

# Some other approaches for motivating the Schrödinger’s equation to compare with.

Many other methods of motivating the Schrödinger equation are considerably simpler and shorter than this one, however this simplicity may come at the expense of a corresponding excess of magic steps.

French and Taylor ([7]) arrive at the specialization of Schrödinger’s equation for a particle in a one dimensional potential

$-\frac{\hbar^2}{2m} \frac{\partial^2 \psi}{\partial x^2} + V\psi = i \hbar \frac{\partial \psi}{\partial t} \quad\quad\quad(44)$

in only seven pages with no mathematics or physics unfamiliar to second year engineering undergrads (this is a remarkable success IMO). It is worth noting that they include a disclaimer upfront “we … simply try to make the form of the equation plausible”. The plausibility arguments found in this text are not uncommon, and can be found for example (without the surrounding discussion of the text) in the “Short heuristic derivation” of the Wikipedia’s Schrödinger’s equation article ([8]).

Whether these qualify as derivations is justifiably debatable, and this is perhaps the origin of statements like “Schrödinger’s equation cannot be derived” ([9]).

Considerable care is required to construct logically consistent arguments that motivate the quantum wave equation in a fashion that is not simply playing the equation in reverse. Bohm’s Quantum Theory ([10]) contains such a carefully crafted treatment. The cost of this presentation is that it takes nine chapters and two hundred pages to get to the starting point of many other introductory Quantum texts.

Heisenberg apparently was able to build his matrix mechanics using observational data (spectral measurements) as a starting point. That would be a desirable motivational route, but there also appears to be agreement that his construction was something that nobody else in their right mind would have thought of. Susskind even says of it in one of his lectures: “I don’t know what he was smoking”!

Susskind’s method of teaching QM goes straight to the point, and he uses nothing but Dirac’s axiomatic formulation. There’s a lot of abstraction in that approach and it is perhaps not the most palatable technique available to a new learner.

Perhaps agreeing with the it-cannot-be-derived opinion, the text of Liboff ([11]) appears to take a “let’s calculate” approach. There is little attempt to motivate the equation; instead it is presented rather abstractly as an operatorization of the Hamiltonian. This requires the magic identification $\mathbf{p} \sim -i \hbar \boldsymbol{\nabla}$, something harder to make plausible in a classical context, and one is left to learn the equation’s characteristics by using it. This engineering approach has some merits, but must also contribute to much of the mystery and confusion surrounding the subject, the most classic example being Feynman’s famous “I think I can safely say that nobody understands quantum mechanics”.

# References

[1] W. Pauli. Wave Mechanics. Courier Dover Publications, 2000.

[2] Wikipedia. Klein-Gordon equation — wikipedia, the free encyclopedia [online]. 2009. [Online; accessed 5-May-2009]. \url{http://en.wikipedia.org/w/index.php?title=Klein%E2%80%93Gordon_equation&oldid=288115284}.

[3] Prof. Brad Osgood. The Fourier Transform and its Applications. [online]. http://www.stanford.edu/class/ee261/book/all.pdf.

[4] A. F. Kracklauer. Louis de Broglie (Thesis): On the theory of Quanta [online]. http://www.ensmp.fr/aflb/LDB-oeuvres/De_Broglie_Kracklauer.pdf [cited 19 June 2009].

[5] H. Goldstein. Classical mechanics. Cambridge: Addison-Wesley Press, Inc, 1st edition, 1951.

[6] J.D. Jackson. Classical Electrodynamics. Wiley, 2nd edition, 1975.

[7] A.P. French and E.F. Taylor. An Introduction to Quantum Physics. CRC Press, 1998.

[8] Wikipedia. Schrödinger equation (short heuristic derivation) — wikipedia, the free encyclopedia [online]. 2009. [Online; accessed 29-June-2009]. \url{http://en.wikipedia.org/w/index.php?title=Schr%C3%B6dinger_equation&oldid=299325461}.

[9] Carl R. (Rod) Nave. Free particle approach to the Schrodinger equation [online]. http://hyperphysics.phy-astr.gsu.edu/Hbase/quantum/Schr2.html [cited 19 June 2009].

[10] D. Bohm. Quantum Theory. Courier Dover Publications, 1989.

[11] R. Liboff. Introductory Quantum Mechanics. 2003.