## Curious problem using the variational method to find the ground state energy of the Harmonic oscillator (cont.)

Posted by peeterjoot on October 7, 2011

Picking up where this problem was last abandoned.

# Calculating the Fourier terms

Our trial wave function, $\psi(x) \propto e^{-\beta \lvert x \rvert}$, is plotted in figure (\ref{fig:expMinusBetsAbsX})

\begin{figure}[htp]

\centering

\includegraphics[totalheight=0.2\textheight]{expMinusBetsAbsX}

\caption{Exponential trial function with absolute exponential die off.}

\end{figure}

The zeroth order fitting using the Gaussian exponential is found to be

With this coefficient, the fit is plotted in figure (\ref{fig:expMinusBetsAbsXfirstOrderFitting}) and can be seen to match fairly well

\begin{figure}[htp]

\centering

\includegraphics[totalheight=0.2\textheight]{expMinusBetsAbsXfirstOrderFitting}

\caption{First ten orders, fitting harmonic oscillator wavefunctions to this trial function.}

\end{figure}

The higher order terms get small fast, but figure (\ref{fig:expMinusBetsAbsXtenthOrderFitting}), which depicts a tenth order fitting, shows that it would take a large number of them to get anything close to the sharp peak that we have in our exponential trial function.

\begin{figure}[htp]

\centering

\includegraphics[totalheight=0.2\textheight]{expMinusBetsAbsXtenthOrderFitting}

\caption{Tenth order harmonic oscillator wavefunction fitting.}

\end{figure}

Note that all the brackets of odd order with the trial function are zero, since the trial function is even, which is why the tenth order approximation is only a sum of six terms.
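As a numerical sanity check of this parity argument (an illustration only, assuming $\beta = 1$ and natural units $m = \omega = \hbar = 1$, which are not stated in the post), the brackets $\left\langle n \middle| \psi \right\rangle$ can be computed directly:

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi

def psi_n(n, x):
    """Normalized harmonic oscillator eigenfunction, with m = omega = hbar = 1."""
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0  # select the Hermite polynomial H_n
    norm = 1.0 / np.sqrt(2.0**n * factorial(n) * np.sqrt(pi))
    return norm * hermval(x, coeffs) * np.exp(-x**2 / 2)

x = np.linspace(-20, 20, 400001)
dx = x[1] - x[0]
trial = np.exp(-np.abs(x))  # unnormalized trial function, beta = 1

for n in range(6):
    c_n = np.sum(psi_n(n, x) * trial) * dx  # bracket <n|psi>, Riemann sum
    print(n, f"{c_n:+.6f}")
```

The odd-$n$ brackets come out numerically zero, as they must: the product of an even trial function and an odd eigenfunction integrates to zero.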

Details for this \href{https://github.com/peeterjoot/physicsplay/blob/master/notes/phy456/gaussian\

The question of interest is this: if we can approximate the trial function so nicely (except at the origin), even with just a first order approximation (polynomials times Gaussian functions, where the polynomials are Hermite polynomials), and we can get an exact value for the lowest energy state using the first order approximation of our trial function, why do we get garbage once we include enough terms that the peak is sharp? It must therefore be important to consider the origin, but how do we give some meaning to the derivative of the absolute value function there? The key, supplied when asking in office hours for the course, is to express the absolute value function in terms of Heaviside step functions, for which the derivative can be identified as the delta function.

# Correction: treating the origin carefully

Here’s how we can express the absolute value function using the Heaviside step function
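The intended expression is presumably the standard decomposition (a reconstruction, assuming $\theta$ denotes the Heaviside step):

```latex
\lvert x \rvert = x \theta(x) - x \theta(-x),
```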

where the step function $\theta(x)$ is zero for $x < 0$ and one for $x > 0$, as plotted in figure (\ref{fig:stepFunction}).

\begin{figure}[htp]

\centering

\includegraphics[totalheight=0.2\textheight]{stepFunction}

\caption{The Heaviside step function.}

\end{figure}

Expressed this way, with the identification $\theta'(x) = \delta(x)$, we have for the derivative of the absolute value function

Observe that we have our expected unit derivative for $x > 0$, and $-1$ derivative for $x < 0$. At the origin the step function contributions cancel, and we are left with

We’ve got zero times infinity here, so how do we give meaning to this? As with any delta functional, we’ve got to apply it to a well behaved (square integrable) test function and integrate. Doing so we have

This equals zero for any well behaved test function $f(x)$. Since the delta function only picks up the contribution at the origin, we can therefore identify $x \delta(x)$ as zero.
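This can be checked symbolically; here is a small sketch using sympy, with a concrete Gaussian standing in for the arbitrary test function:

```python
import sympy as sp

x = sp.symbols('x', real=True)
f = sp.exp(-x**2)  # a concrete well behaved test function

# Apply the distribution x*delta(x) to the test function and integrate
result = sp.integrate(x * sp.DiracDelta(x) * f, (x, -sp.oo, sp.oo))
print(result)  # 0: x*delta(x) acts as zero on any smooth test function
```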

Using the same technique, we can express our trial function in terms of steps

This we can now take derivatives of, even at the origin, and find

Taking second derivatives we find
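These derivatives, reconstructed under the assumption that the trial function is written as $\psi = e^{-\beta x} \theta(x) + e^{\beta x} \theta(-x)$, take the form:

```latex
\begin{aligned}
\psi' &= -\beta e^{-\beta x} \theta(x) + \beta e^{\beta x} \theta(-x)
       + e^{-\beta x} \delta(x) - e^{\beta x} \delta(-x) \\
      &= -\beta e^{-\beta x} \theta(x) + \beta e^{\beta x} \theta(-x), \\
\psi'' &= \beta^2 \psi - 2 \beta \delta(x).
\end{aligned}
```

The delta terms in the first derivative cancel because $\delta(-x) = \delta(x)$ and both exponentials equal one at the origin; it is the $-2\beta\delta(x)$ term in the second derivative that a naive treatment of the derivative misses.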

Now application of the Hamiltonian operator on our trial function gives us

so

Normalized we have

This is looking much more promising. We’ll have the sign alternation that we require to find a positive, non-complex value for $\beta^2$ when the energy is minimized. That is

so the extremum is found at
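Reconstructing from the energy expression that appears in the comment thread below (the same one as in the Mathematica snippet there), the expectation and its extremum are:

```latex
\begin{aligned}
E(\beta) &= \frac{\beta^2 \hbar^2}{2m} + \frac{m \omega^2}{4 \beta^2}, \\
0 = \frac{dE}{d\beta} &= \frac{\beta \hbar^2}{m} - \frac{m \omega^2}{2 \beta^3}
\quad \implies \quad
\beta^2 = \frac{m \omega}{\sqrt{2} \hbar}.
\end{aligned}
```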

Plugging this back in we find that our trial function associated with the minimum energy (unnormalized still) is

and that energy, after substitution, is

We have something that’s not the true ground state energy, but is at least a ball park value. However, to get this result, we have to be very careful to treat our point of singularity. A derivative that we’d call undefined in first year calculus is not only defined, but required, for this treatment to work!
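The minimization can be verified symbolically; a sketch using sympy on the energy expression quoted in the comment below:

```python
import sympy as sp

beta, m, omega, hbar = sp.symbols('beta m omega hbar', positive=True)

# Energy expectation for the e^{-beta |x|} trial function
E = beta**2 * hbar**2 / (2 * m) + m * omega**2 / (4 * beta**2)

# Solve dE/dbeta = 0, keeping the positive root
roots = [b for b in sp.solve(sp.diff(E, beta), beta) if b.is_positive]
beta_min = roots[0]

# beta_min**2 equals m*omega/(sqrt(2)*hbar), and the minimum energy
# is hbar*omega/sqrt(2), above the true ground state hbar*omega/2
assert sp.simplify(beta_min**2 - m * omega / (sp.sqrt(2) * hbar)) == 0
assert sp.simplify(E.subs(beta, beta_min) - hbar * omega / sp.sqrt(2)) == 0
print("beta^2 =", sp.simplify(beta_min**2))
print("E_min  =", sp.simplify(E.subs(beta, beta_min)))
```

The variational bound $\hbar\omega/\sqrt{2} \approx 0.707\, \hbar\omega$ sits above the exact ground state energy $\hbar\omega/2$, as it must.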

## Mike Bantegui said

Hi Peeter,

I was working through your algebra from your previous post and I noticed you picked up an extra minus sign somewhere, which led to an imaginary number after you took a square root. You arrive at the same energy as before, but this time around you didn’t get the factor of i. I double checked my work on paper and through Mathematica, and I can confirm that the minimum energy is indeed what you find. Solving the problem through the standard methods as before (by splitting the integral into separate portions, or equivalently integrating over one half of the whole plane and doubling that result), I was able to obtain the correct expression for beta squared, which was \pm \frac{m \omega}{\sqrt{2} \hbar}.

Specifically, within the pdf you have available on this post I see the error is between equations 9 and 10. You’ll see that when you differentiate 9 with respect to beta, one of the minus signs from equation 9 disappeared. If you fix that, you get the same exact results onwards as you did here.

You can try the following Mathematica expression and you’ll see what I mean:

Solve[D[(\[Beta]^2 \[HBar]^2)/(2 m) + (m \[Omega]^2)/(4 \[Beta]^2), \[Beta]] == 0, \[Beta]]

## peeterjoot said

The extra minus sign in (4.10) (or (9) in the pdf) is a typo. If you look back to just before ‘Note that evaluating this’ in the previous post (or just before ‘A naive evaluation of this integral’ in the pdf which is now a bit different), you’ll see there’s only one negative sign, and it appears I’ve corrected things on the next line. This still leaves the imaginary trouble, only resolved by the delta function handling of the absolute value’s derivative at the origin.