Peeter Joot's (OLD) Blog.

Math, physics, perl, and programming obscurity.

Archive for December, 2010

Just Energy Canada. Google it and find the scam links.

Posted by peeterjoot on December 30, 2010

I am still trying to get out of my Just Energy “contract”.  I found it interesting that googling `Just Energy Canada` so easily finds scam-related content:

Also telling is that Google’s first related search suggestion is:

Searches related to just energy canada

 


Posted in Incoherent ramblings | 2 Comments »

The kids’ newest stop motion animation project. Technical difficulties with the new Windows (Live) Movie Maker.

Posted by peeterjoot on December 30, 2010

Lance and Aurora, this time in collaboration with Rachel, have produced a new stop motion animation (the volume is really quiet since they weren’t standing close enough to the mic, wherever it is on the new computer). Some of the older creations are available here.

This is a fun sort of project for the kids, since they can produce it quickly, and always enjoy the end result. This time the post production was done with Windows Live Movie Maker instead of the old Windows Movie Maker (which doesn’t come with Windows 7).

I think they’ve tried to make the user interface simpler, but if that’s the case, they’ve gone too far. Basic options like setting the frame rate aren’t available under any sort of global options that I can find. If you go to the edit tab immediately after importing the pictures, you can set the frame rate by changing the duration, but if you don’t like your pick there’s no option to change it globally, and you have to do it frame by frame. It ends up being faster just to close the project and start over.

I also seem to recall that you could record the sound directly in movie maker before, and now you have to use sound recorder and import it. I also can’t seem to find the timeline view in the new version of movie maker, so it’s hard to get the voice and the picture synced up. If somebody knows how to do this I’d be interested to know.

I also wasn’t able to hike up the volume for the attached sound. It was already at max and wouldn’t go louder on the volume slider for some reason.

Perhaps there’s a better free animation alternative to the new Movie Maker. Given that the kids seem to want to do this only once every couple of years, it would have to be easy enough to use that no big time investment in learning it would be required.

Posted in Incoherent ramblings | Leave a Comment »

Vector form of Julia fractal

Posted by peeterjoot on December 27, 2010

[Click here for a PDF of this post with nicer formatting]

Motivation.

As outlined in [1], 2-D and N-D Julia fractals can be computed using the geometric product, instead of complex numbers. I explore a couple of details related to that here.

Guts

Fractal patterns like the Mandelbrot and Julia sets are typically generated using iterative computations in the complex plane. For the Julia set, our iteration has the form

\begin{aligned}Z \rightarrow Z^p + C\end{aligned} \hspace{\stretch{1}}(2.1)

where p is an integer constant, and Z and C are complex numbers. For p=2, holding C fixed and scanning the initial Z gives a Julia set, while scanning C with Z initially zero gives the Mandelbrot set. Given the isomorphism between complex numbers and vectors using the geometric product, we can write

\begin{aligned}Z &= \mathbf{x} \hat{\mathbf{n}} \\ C &= \mathbf{c} \hat{\mathbf{n}},\end{aligned} \hspace{\stretch{1}}(2.2)

and reexpress the Julia iterator as

\begin{aligned}\mathbf{x} \rightarrow (\mathbf{x} \hat{\mathbf{n}})^p \hat{\mathbf{n}} + \mathbf{c}\end{aligned} \hspace{\stretch{1}}(2.4)

It’s not obvious that the RHS of this equation is a vector and not a multivector, especially when the vector \mathbf{x} lies in \mathbb{R}^{3} or a higher dimensional space. To get a feel for this, let’s start by writing this out in components for \hat{\mathbf{n}} = \mathbf{e}_1 and p=2. We obtain for the product term

\begin{aligned}(\mathbf{x} \hat{\mathbf{n}})^p \hat{\mathbf{n}} &= \mathbf{x} \hat{\mathbf{n}} \mathbf{x} \hat{\mathbf{n}} \hat{\mathbf{n}} \\ &= \mathbf{x} \hat{\mathbf{n}} \mathbf{x} \\ &= (x_1 \mathbf{e}_1 + x_2 \mathbf{e}_2  )\mathbf{e}_1(x_1 \mathbf{e}_1 + x_2 \mathbf{e}_2  ) \\ &= (x_1 + x_2 \mathbf{e}_2 \mathbf{e}_1 )(x_1 \mathbf{e}_1 + x_2 \mathbf{e}_2  ) \\ &= (x_1^2 - x_2^2 ) \mathbf{e}_1 + 2 x_1 x_2 \mathbf{e}_2\end{aligned}

Looking at the same square in coordinate representation for the \mathbb{R}^{n} case (using summation notation unless otherwise specified), we have

\begin{aligned}\mathbf{x} \hat{\mathbf{n}} \mathbf{x} &= x_k \mathbf{e}_k \mathbf{e}_1x_m \mathbf{e}_m  \\ &= \left(x_1 + \sum_{k>1} x_k \mathbf{e}_k \mathbf{e}_1\right)x_m \mathbf{e}_m  \\ &= x_1 x_m \mathbf{e}_m +\sum_{k>1} x_k x_m \mathbf{e}_k \mathbf{e}_1 \mathbf{e}_m \\ &= x_1 x_m \mathbf{e}_m +\sum_{k>1} x_k x_1 \mathbf{e}_k +\sum_{k>1,m>1} x_k x_m \mathbf{e}_k \mathbf{e}_1 \mathbf{e}_m \\ &= \left(x_1^2 -\sum_{k>1} x_k^2\right) \mathbf{e}_1+2 \sum_{k>1} x_1 x_k \mathbf{e}_k +\sum_{k,m>1, k \ne m} x_k x_m \mathbf{e}_k \mathbf{e}_1 \mathbf{e}_m \\ \end{aligned}

This last term is zero since \mathbf{e}_k \mathbf{e}_1 \mathbf{e}_m = -\mathbf{e}_m \mathbf{e}_1 \mathbf{e}_k, and we are left with

\begin{aligned}\mathbf{x} \hat{\mathbf{n}} \mathbf{x} =\left(x_1^2 -\sum_{k>1} x_k^2\right) \mathbf{e}_1+2 \sum_{k>1} x_1 x_k \mathbf{e}_k,\end{aligned} \hspace{\stretch{1}}(2.5)

a vector, even for non-planar vectors. How about for an arbitrary orientation of the unit vector in \mathbb{R}^{n}? For that we get

\begin{aligned}\mathbf{x} \hat{\mathbf{n}} \mathbf{x} &=(\mathbf{x} \cdot \hat{\mathbf{n}} \hat{\mathbf{n}} + \mathbf{x} \wedge \hat{\mathbf{n}} \hat{\mathbf{n}} ) \hat{\mathbf{n}} \mathbf{x}  \\ &=(\mathbf{x} \cdot \hat{\mathbf{n}} + \mathbf{x} \wedge \hat{\mathbf{n}} ) (\mathbf{x} \cdot \hat{\mathbf{n}} \hat{\mathbf{n}} + \mathbf{x} \wedge \hat{\mathbf{n}} \hat{\mathbf{n}} )   \\ &=((\mathbf{x} \cdot \hat{\mathbf{n}})^2 + (\mathbf{x} \wedge \hat{\mathbf{n}})^2) \hat{\mathbf{n}}+ 2 (\mathbf{x} \cdot \hat{\mathbf{n}}) (\mathbf{x} \wedge \hat{\mathbf{n}}) \hat{\mathbf{n}}\end{aligned}

We can read 2.5 off of this result by inspection for the \hat{\mathbf{n}} = \mathbf{e}_1 case.

It is now straightforward to show that the product (\mathbf{x} \hat{\mathbf{n}})^p \hat{\mathbf{n}} is a vector for integer p \ge 2. We’ve covered the p=2 case, justifying an assumption that this product has the following form

\begin{aligned}(\mathbf{x} \hat{\mathbf{n}})^{p-1} \hat{\mathbf{n}} = a \hat{\mathbf{n}} + b (\mathbf{x} \wedge \hat{\mathbf{n}}) \hat{\mathbf{n}},\end{aligned} \hspace{\stretch{1}}(2.6)

for scalars a and b. The induction test becomes

\begin{aligned}(\mathbf{x} \hat{\mathbf{n}})^{p} \hat{\mathbf{n}} &= (\mathbf{x} \hat{\mathbf{n}})^{p-1} (\mathbf{x} \hat{\mathbf{n}}) \hat{\mathbf{n}} \\ &= (\mathbf{x} \hat{\mathbf{n}})^{p-1} \mathbf{x} \\ &= (a + b (\mathbf{x} \wedge \hat{\mathbf{n}}) ) ((\mathbf{x} \cdot \hat{\mathbf{n}} )\hat{\mathbf{n}} + (\mathbf{x} \wedge \hat{\mathbf{n}}) \hat{\mathbf{n}}) \\ &= ( a(\mathbf{x} \cdot \hat{\mathbf{n}} ) + b (\mathbf{x} \wedge \hat{\mathbf{n}})^2 ) \hat{\mathbf{n}}+ ( a + b(\mathbf{x} \cdot \hat{\mathbf{n}} ) ) (\mathbf{x} \wedge \hat{\mathbf{n}}) \hat{\mathbf{n}}.\end{aligned}

Again we have a vector, split nicely into projective and rejective components, so for any integer power p our iterator 2.4 employing the geometric product is a mapping from vectors to vectors.

There is a striking image in the text of such a Julia set for a 3D iterator, and an exercise is left for the adventurous reader to attempt to code it, based on the 2D p=2 sample code the authors provide.
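As a quick illustration of the planar \hat{\mathbf{n}} = \mathbf{e}_1, p=2 case, here is a minimal Python escape-time sketch of the iterator, using the component form 2.5 of \mathbf{x} \hat{\mathbf{n}} \mathbf{x}. The function name and parameter choices are mine, not from the text:

```python
def julia_escape_time(x, c, max_iter=100, bailout=2.0):
    """Iterate x -> (x nhat)^2 nhat + c for nhat = e1 in the plane.

    Componentwise this is (x1, x2) -> (x1^2 - x2^2 + c1, 2 x1 x2 + c2),
    the usual complex z -> z^2 + c in disguise.  Returns the iteration
    count at which |x| exceeds the bailout radius, or max_iter if the
    orbit stays bounded that long.
    """
    x1, x2 = x
    c1, c2 = c
    for i in range(max_iter):
        if x1 * x1 + x2 * x2 > bailout * bailout:
            return i
        # tuple assignment evaluates the RHS before rebinding, so both
        # components use the old (x1, x2)
        x1, x2 = x1 * x1 - x2 * x2 + c1, 2 * x1 * x2 + c2
    return max_iter
```

Scanning x over a grid with c fixed colours a Julia set; scanning c with x initially zero gives the Mandelbrot set instead.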

References

[1] L. Dorst, D. Fontijne, and S. Mann. Geometric Algebra for Computer Science. Morgan Kaufmann, San Francisco, 2007.

Posted in Math and Physics Learning. | Leave a Comment »

More windows powershell play

Posted by peeterjoot on December 26, 2010

It’s been a while since I tried windows powershell. I have a new Windows 7 computer now, and it seemed like a good opportunity to try it again. This shell is very different from the old Windows shell, and it takes some getting used to. My first task today was to figure out how to modify, or even see, an environment variable. It turns out that this has a unix-like syntax, so for example, I’m able to do the following:

echo $env:PATH
echo $env:HOMEPATH

Okay, perhaps I can make a unix-like ‘cd’ alias that takes me straight to my %HOMEPATH%. I try the following (long-winded) alias-like command:

new-item -path alias:homedir -value 'cd $env:homepath'

Does it work?

PS C:\Users\Peeter\bin> homedir
Cannot resolve alias 'homedir' because it refers to term 'cd $env:homepath', which is not recognized as a cmdlet, funct
ion, operable program, or script file. Verify the term and try again.
At line:1 char:8

Nope. Perhaps I need chdir instead of cd. A bit of blundering and I find that I can remove my alias and try again with:

Remove-Item -path alias:homedir
new-item -path alias:homedir -value 'chdir $env:homepath'

But this behaves no better. I seem to recall that ‘cd’ and ‘chdir’ are in fact aliases in powershell, so I probably need the real cmdlet names that these resolve to.

I recall that I’d made a note about how to save and restore aliases, and how to see what all the aliases were, but I get sidetracked wondering if I can make an aliases script. I’m in a bit of a chicken and egg bind, though, since I can’t start up a decent (ie: vim) editor. How do I modify my path?

It appears that I can do this also unix style with, in this case:

$env:PATH = "$env:PATH;C:\Program Files (x86)\Vim\vim73"

I don’t want to type this again, so it’s time to see if I can make a setenv.ps1 powershell script to perform this task. I produce such a file and get a message that scripts are disabled, with info on how to digitally sign scripts to allow execution, or how to change execution privileges to only block remote shell scripts. The command to do that appears to be:

Set-ExecutionPolicy RemoteSigned

but you have to start powershell in admin mode to do so. Under the powershell menu in the Start Menu (once I pin the command there), I have an admin mode prompt and am able to run the command.

Rather painful, but I’m now able to create a basic powershell script to import some useful environment settings. I found that this can be done in a multi-line fashion using the backquote as a line continuation character:

PS C:\Users\Peeter\bin> type .\setenv.ps1
$env:PATH = "$env:PATH" `
+ ";C:\Program Files (x86)\Vim\vim73" `
+ ";C:\cygwin\bin" `
+ ";C:\Program Files\Microsoft SDKs\Windows\v7.0\Bin\x64"

Now that I can run a script, here’s my aliases script:

get-item -path alias:* | where-object {$_.Definition -eq "Get-Childitem"}

I see that I have an alias for Get-Childitem, and can in fact use that to see my aliases:

PS C:\Users\Peeter\bin> gci alias:* | grep chdir
Alias           chdir                                               Set-Location

It also appears that I can use alias for this. Now can I make an alias that changes my home directory?

If I put this in a .ps1 script, and try to execute it, it appears that the scope of the new-item is restricted to the execution of the .ps1 script, as the following illustrates:

PS C:\Users\Peeter\bin> type .\alias1.ps1
new-item -path alias:homedir -value 'Set-Location $env:homepath'
PS C:\Users\Peeter\bin> .\alias1.ps1

CommandType     Name                                                Definition
-----------     ----                                                ----------
Alias           homedir                                             Set-Location $env:homepath


PS C:\Users\Peeter\bin> homedir
The term 'homedir' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spe
lling of the name, or if a path was included, verify that the path is correct and try again.
At line:1 char:8
+ homedir <<< get-item -path alias:homedir
Get-Item : Cannot find path 'Alias:\homedir' because it does not exist.
At line:1 char:9
+ get-item <<<<  -path alias:homedir
    + CategoryInfo          : ObjectNotFound: (Alias:\homedir:String) [Get-Item], ItemNotFoundException
    + FullyQualifiedErrorId : PathNotFound,Microsoft.PowerShell.Commands.GetItemCommand

But, if I do it on the command line directly, I still have no luck:

PS C:\Users\Peeter\bin> new-item -path alias:homedir -value 'Set-Location $env:homepath'

CommandType     Name                                                Definition
-----------     ----                                                ----------
Alias           homedir                                             Set-Location $env:homepath


PS C:\Users\Peeter\bin> homedir
Cannot resolve alias 'homedir' because it refers to term 'Set-Location $env:homepath', which is not recognized as a cmd
let, function, operable program, or script file. Verify the term and try again.
At line:1 char:8
+ homedir <<<<
    + CategoryInfo          : ObjectNotFound: (homedir:String) [], CommandNotFoundException
    + FullyQualifiedErrorId : AliasNotResolvedException

Perhaps this is a quoting issue? Maybe no quotes, or double quotes? No quotes doesn’t work:

PS C:\Users\Peeter\bin> gci alias:homedir

CommandType     Name                                                Definition
-----------     ----                                                ----------
Alias           homedir                                             Set-Location $env:homepath


PS C:\Users\Peeter\bin> remove-item -path alias:homedir
PS C:\Users\Peeter\bin> new-item -path alias:homedir -value Set-Location $env:HOMEPATH
New-Item : A positional parameter cannot be found that accepts argument '\Users\Peeter'.
At line:1 char:9
+ new-item <<<<  -path alias:homedir -value Set-Location $env:HOMEPATH
    + CategoryInfo          : InvalidArgument: (:) [New-Item], ParameterBindingException
    + FullyQualifiedErrorId : PositionalParameterNotFound,Microsoft.PowerShell.Commands.NewItemCommand

How about double quotes?

PS C:\Users\Peeter\bin> new-item -path alias:homedir -value "Set-Location $env:homepath"

CommandType     Name                                                Definition
-----------     ----                                                ----------
Alias           homedir                                             Set-Location \Users\Peeter


PS C:\Users\Peeter\bin> homedir
Cannot resolve alias 'homedir' because it refers to term 'Set-Location \Users\Peeter', which is not recognized as a cmd
let, function, operable program, or script file. Verify the term and try again.
At line:1 char:8
+ homedir <<<<
    + CategoryInfo          : ObjectNotFound: (homedir:String) [], CommandNotFoundException
    + FullyQualifiedErrorId : AliasNotResolvedException

Nope.

Perhaps aliases with multiple parameters have to be specified differently?

The powershell `help alias` info shows a similar example, but they use a function to do it. Let’s try that:

PS C:\Users\Peeter\bin> function homedir {set-location -path $env:HOMEPATH}
PS C:\Users\Peeter\bin> homedir
PS C:\Users\Peeter> 

Okay. I can live with that. Use functions instead of aliases; a powershell alias can only name a single command, with no arguments attached. I do that in bash too sometimes, and it’s not too much of an imposition to do that for simple “aliases” in powershell. Now I just want an easy way of repeating this, and see that this is possible with a powershell profile script.

The $profile variable has the path to the script:

PS C:\Users\Peeter\bin> echo $profile
C:\Users\Peeter\Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1

I’d seen this earlier when I was figuring out how to change environment variables. The variable is set even though the path (including the directory leading up to it) didn’t actually exist. Once I created that directory and the file, the script still didn’t execute, because of the permissions issue. Now that I’ve set permissions to exclude only remote scripts, it does work, and I can put my “alias” there.
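Putting the pieces together, the profile script ends up looking something like the following sketch (the paths are the ones from my machine above; the function name is just my pick):

```powershell
# Microsoft.PowerShell_profile.ps1 -- sourced at powershell startup,
# once scripts are enabled with: Set-ExecutionPolicy RemoteSigned

# extend the search path, unix style
$env:PATH = "$env:PATH" `
 + ";C:\Program Files (x86)\Vim\vim73" `
 + ";C:\cygwin\bin"

# a function rather than an alias, since an alias cannot carry arguments
function homedir { Set-Location -path $env:HOMEPATH }
```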

Wow, that sidetracked me a bit! I started out with the intention to try to get an opengl sample program compiling with the Windows SDK compiler, and it appears that I cannot even do that in a powershell environment. There’s no equivalent to SetEnv.Cmd for powershell in the Windows SDK!

Posted in Development environment | 4 Comments »

Just Energy Canada nasty business practices.

Posted by peeterjoot on December 23, 2010

I’ve just had the unfortunate experience of dealing with the Just Energy company. Years ago, after turning down the endless stream of pushy door to door energy salespeople, I signed up, on the advice of my father in law, with one of these fixed price gas distributors, specifically Just Energy. I was never comfortable with that decision after the fact, since I expected some sort of minimal exposition of the rates I was paying vs. the utility rates, to validate that I was paying less on average, or if I wasn’t, to at least inform me of the fact so that I could make an informed decision about renewal.

Because of the opacity of this company’s dealings I had no intention of renewing. I didn’t know that my (now) ex-wife had submitted to one of the same pushy door to door salespeople, and had signed on for renewal. I am guilty of not looking at my gas bills closely enough: I didn’t even notice that the gas distributor on the Enbridge bill had not reverted to just plain old Enbridge at floating rates.

That renewal was in 2007, and obligated us to a fixed rate contract until 2013. One thing that the pushy door to door salespeople never tell you is that they impose humongous exit fees for their contract, and that the fine print in the contract also requires the contract to migrate with you if you move. In our case, both of us moved, and we were never contacted about any sort of continuance obligation. Mail forwarding for both of us was working fine, but there was never any attempt by Just Energy to contact us. Instead, what they do is try to intimidate you into the continuance by immediately sending collections agencies after you. That is a very underhanded, almost mobster, approach in my opinion. They claim to have sent a letter before the collections agency, but I don’t actually believe them. It would be interesting to talk confidentially to one of their employees to see what their internal policies actually are. I’d not be surprised at all if collections before contact is part of their standard operating procedure.

So it was a rather rude surprise when my ex-wife called me, rather perturbed, saying that there was a collections agency after her for a Just Energy bill, threatening court action on non-payment of a sum of about $450! This is the way that this company appears to work to force you into contract continuance on a move. Calling in about this I got nothing but a runaround, over and over again. They claimed to have my ex-wife’s signature on file for the renewal, and it took over a month and a half of calling to try to get verification that this was in fact the case. I’d asked them to email me and my ex a copy of this document, and they said they could only surface mail it. I never did receive anything from them, but my ex did. She also received more collections notices. They are very fast sending those, and have absolutely no trouble doing so.

Eventually, they admitted that they could email this, perhaps after I’d been on the phone with them for at least an hour and a half over all the various repeated attempts to see if I really did have some sort of contractual obligation to continue dealing with them. For some reason it took them over two weeks to produce this email. In that interval, they had no problem sending their collections arm after my ex once again.

I had no intention of continuing with their fixed term contract business even if they did provide a copy of a valid contract, and my ex and I decided to split the cost of the contract termination fee. That exorbitant $450, split two ways, would probably have been cheaper than continuing to deal with them further. One of the phone support people (perhaps the third one I talked to) let it slip that I could get out of the fixed term contract by assuming the contract at a variable rate. The variable rate that he quoted was what was on my gas bill, so I’m hoping that they do not inflate that too. They did verify in the end that I’d paid more with them than I would have had I not been on a fixed term contract, something that is in sync with other information you can easily find about others’ experiences with this company.

Because of the gouging prices they impose for contract termination, and the availability of the unadvertised variable rate, I did end up assuming this last bit of old marital baggage without a fixed price. Once this contract expires I’ll never ever ever have anything to do with this company again, and will not hesitate to supply the same advice to anybody. Their logo should probably appear beside unethical in the dictionary.

Posted in Incoherent ramblings | 42 Comments »

Harmonic Oscillator position and momentum Hamiltonian operators

Posted by peeterjoot on December 18, 2010

[Click here for a PDF of this post with nicer formatting]

Motivation.

Hamiltonian problem from Chapter 9 of [1].

Problem 1.

Statement.

Assume x(t) and p(t) to be Heisenberg operators with x(0) = x_0 and p(0) = p_0. For a Hamiltonian corresponding to the harmonic oscillator show that

\begin{aligned}x(t) &= x_0 \cos \omega t + \frac{p_0}{m \omega} \sin \omega t \\ p(t) &= p_0 \cos \omega t - m \omega x_0 \sin \omega t.\end{aligned} \hspace{\stretch{1}}(3.1)

Solution.

Recall that the Heisenberg operators were defined by factoring out the time evolution from a set of states

\begin{aligned}{\langle {\alpha(t) } \rvert} A {\lvert { \beta(t) } \rangle}={\langle {\alpha(0) } \rvert} e^{i H t/\hbar} A e^{-i H t/\hbar} {\lvert { \beta(0) } \rangle}.\end{aligned} \hspace{\stretch{1}}(3.3)

So one way to complete the task is to compute these exponential sandwiches. Recall from the appendix of chapter 10, that we have

\begin{aligned}e^A B e^{-A}= B + \left[{A},{B}\right]+ \frac{1}{{2!}} \left[{A},{\left[{A},{B}\right]}\right] + \cdots\end{aligned} \hspace{\stretch{1}}(3.4)

Perhaps there is also some smarter way to do this, but let’s first try the obvious way.

Let’s summarize the variables we will work with

\begin{aligned}\alpha &= \sqrt{\frac{m \omega}{\hbar}} \\ X &= \frac{1}{{\alpha \sqrt{2}}} ( a + a^\dagger ) \\ P &= -i \hbar \frac{\alpha}{\sqrt{2}} ( a - a^\dagger ) \\ H &= \hbar \omega ( a^\dagger a + 1/2 ) \\ \left[{a},{a^\dagger}\right] &= 1 \end{aligned} \hspace{\stretch{1}}(3.5)

The operator in the exponential sandwich is

\begin{aligned}A = i H t/\hbar = i \omega t ( a^\dagger a + 1/2 )\end{aligned} \hspace{\stretch{1}}(3.10)

Note that the constant 1/2 factor will commute with all operators, which reduces the computation required

\begin{aligned}\left[{i H t/\hbar},{B}\right] = (i\omega t) \left[{a^\dagger a},{B}\right]\end{aligned} \hspace{\stretch{1}}(3.11)

For B = X, or B = P, we’ll want some intermediate results

\begin{aligned}\left[{a^\dagger a},{a}\right]&=a^\dagger a a - a a^\dagger a \\  &=a^\dagger a a - (a^\dagger a + 1) a \\  &=-a,\end{aligned}

and

\begin{aligned}\left[{a^\dagger a},{a^\dagger}\right]&=a^\dagger a a^\dagger - a^\dagger a^\dagger a \\  &=a^\dagger a a^\dagger - a^\dagger (a a^\dagger -1) \\  &=a^\dagger\end{aligned}
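Both of these ladder-operator commutators can be verified numerically with truncated N \times N matrix representations of a and a^\dagger. This sketch is my own, not from the text; conveniently, these particular commutators happen to hold exactly even after truncation:

```python
import numpy as np

# Truncated N x N matrix representations of the ladder operators,
# a |n> = sqrt(n) |n-1>, in the number basis
N = 8
a = np.diag(np.sqrt(np.arange(1.0, N)), k=1)   # lowering operator
ad = a.T                                        # raising operator a-dagger
n = ad @ a                                      # number operator a-dagger a

comm_n_a = n @ a - a @ n      # should equal -a
comm_n_ad = n @ ad - ad @ n   # should equal +a-dagger
```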

Using these we can evaluate the commutators for the position and momentum operators. For position we have

\begin{aligned}\left[{i H t /\hbar },{X}\right] &= (i \omega t) \frac{1}{{\alpha \sqrt{2}}} \left[{a^\dagger a},{a+ a^\dagger}\right] \\ &= (i \omega t) \frac{1}{{\alpha \sqrt{2}}} (-a + a^\dagger ) \\ &= \frac{\omega t}{\hbar \alpha^2} \frac{-i \hbar \alpha}{ \sqrt{2}} (a - a^\dagger ).\end{aligned}

Since \alpha^2 \hbar = m \omega, we have

\begin{aligned}\left[{i H t /\hbar },{X}\right] = (\omega t) \frac{P}{m \omega }.\end{aligned} \hspace{\stretch{1}}(3.12)

For the momentum operator we have

\begin{aligned}\left[{i H t /\hbar },{P}\right] &= (i \omega t) \frac{-i \hbar \alpha}{ \sqrt{2}} \left[{a^\dagger a},{a- a^\dagger}\right] \\ &= (i \omega t) \frac{i \hbar \alpha}{ \sqrt{2}} (a + a^\dagger) \\ &= -(\omega t) (\hbar \alpha^2) X\end{aligned}

So we have

\begin{aligned}\left[{i H t /\hbar },{P}\right] = (-\omega t) (m \omega ) X\end{aligned} \hspace{\stretch{1}}(3.13)

The expansion of the exponential series of nested commutators can now be written down by inspection and we get

\begin{aligned}X_H = X + (\omega t) \frac{P}{m \omega} - \frac{(\omega t)^2}{2!} X - \frac{(\omega t)^3}{3!} \frac{P}{m \omega} + \cdots\end{aligned} \hspace{\stretch{1}}(3.14)

\begin{aligned}P_H = P - (\omega t) (m \omega)X - \frac{(\omega t)^2}{2!} P + \frac{(\omega t)^3}{3!} (m \omega)X + \cdots\end{aligned} \hspace{\stretch{1}}(3.15)

Collection of terms gives us the desired answer

\begin{aligned}X_H = X \cos(\omega t) + \frac{P}{m \omega} \sin(\omega t)\end{aligned} \hspace{\stretch{1}}(3.16)

\begin{aligned}P_H = P \cos(\omega t) - (m \omega) X \sin(\omega t)\end{aligned} \hspace{\stretch{1}}(3.17)

References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.

Posted in Math and Physics Learning. | Leave a Comment »

Notes for Desai Chapter 26

Posted by peeterjoot on December 9, 2010

[Click here for a PDF of this post with nicer formatting]

Motivation.

Chapter 26 notes for [1].

Guts

Trig relations.

To verify equations 26.3-5 in the text it’s worth noting that

\begin{aligned}\cos(a + b) &= \Re( e^{ia} e^{ib} ) \\ &= \Re( (\cos a + i \sin a)( \cos b + i \sin b) ) \\ &= \cos a \cos b - \sin a \sin b\end{aligned}

and

\begin{aligned}\sin(a + b) &= \Im( e^{ia} e^{ib} ) \\ &= \Im( (\cos a + i \sin a)( \cos b + i \sin b) ) \\ &= \cos a \sin b + \sin a \cos b\end{aligned}

So, for

\begin{aligned}x &= \rho \cos\alpha \\ y &= \rho \sin\alpha \end{aligned} \hspace{\stretch{1}}(2.1)

the transformed coordinates are

\begin{aligned}x' &= \rho \cos(\alpha + \phi) \\ &= \rho (\cos \alpha \cos \phi - \sin \alpha \sin \phi) \\ &= x \cos \phi - y \sin \phi\end{aligned}

and

\begin{aligned}y' &= \rho \sin(\alpha + \phi) \\ &= \rho (\cos \alpha \sin \phi + \sin \alpha \cos \phi) \\ &= x \sin \phi + y \cos \phi \\ \end{aligned}

This allows us to read off the rotation matrix. Without all the messy trig, we can also derive this matrix with geometric algebra.

\begin{aligned}\mathbf{v}' &= e^{- \mathbf{e}_1 \mathbf{e}_2 \phi/2 } \mathbf{v} e^{ \mathbf{e}_1 \mathbf{e}_2 \phi/2 } \\ &= v_3 \mathbf{e}_3 + (v_1 \mathbf{e}_1 + v_2 \mathbf{e}_2) e^{ \mathbf{e}_1 \mathbf{e}_2 \phi } \\ &= v_3 \mathbf{e}_3 + (v_1 \mathbf{e}_1 + v_2 \mathbf{e}_2) (\cos \phi + \mathbf{e}_1 \mathbf{e}_2 \sin\phi) \\ &= v_3 \mathbf{e}_3 + \mathbf{e}_1 (v_1 \cos\phi - v_2 \sin\phi)+ \mathbf{e}_2 (v_2 \cos\phi + v_1 \sin\phi)\end{aligned}

Here we use the Pauli-matrix like identities

\begin{aligned}\mathbf{e}_k^2 &= 1 \\ \mathbf{e}_i \mathbf{e}_j &= -\mathbf{e}_j \mathbf{e}_i,\quad i\ne j\end{aligned} \hspace{\stretch{1}}(2.3)

and also note that \mathbf{e}_3 commutes with the bivector for the x,y plane \mathbf{e}_1 \mathbf{e}_2. We can also read off the rotation matrix from this.
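Neither derivation is hard to believe, but a numeric cross-check is cheap. This little Python sketch (mine, not from the text) compares the rotation matrix read off above against complex multiplication by e^{i\phi}:

```python
import cmath
import math

def rotate(x, y, phi):
    """Counterclockwise rotation by phi, per the matrix read off above."""
    return (x * math.cos(phi) - y * math.sin(phi),
            x * math.sin(phi) + y * math.cos(phi))

x, y, phi = 1.0, 2.0, 0.3
xp, yp = rotate(x, y, phi)
z = (x + 1j * y) * cmath.exp(1j * phi)   # the same rotation, complex form
```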

Infinitesimal transformations.

Recall that in the problems of Chapter 5, one representation of the spin one matrices was calculated [2]. Since the choice of the basis vectors was arbitrary in that exercise, we ended up with a different representation. For S_x, S_y, S_z as found in (26.20) and (26.23) we can also verify easily that we have eigenvalues 0, \pm \hbar. We can also show that our spin kets in this non-diagonal representation have the following column matrix representations:

\begin{aligned}{\lvert {1,\pm 1} \rangle}_x &=\frac{1}{{\sqrt{2}}} \begin{bmatrix}0 \\ 1 \\ \pm i\end{bmatrix} \\ {\lvert {1,0} \rangle}_x &=\begin{bmatrix}1 \\ 0 \\ 0 \end{bmatrix} \\ {\lvert {1,\pm 1} \rangle}_y &=\frac{1}{{\sqrt{2}}} \begin{bmatrix}\pm i \\ 0 \\ 1 \end{bmatrix} \\ {\lvert {1,0} \rangle}_y &=\begin{bmatrix}0 \\ 1 \\ 0 \end{bmatrix} \\ {\lvert {1,\pm 1} \rangle}_z &=\frac{1}{{\sqrt{2}}} \begin{bmatrix}1 \\ \pm i \\ 0\end{bmatrix} \\ {\lvert {1,0} \rangle}_z &=\begin{bmatrix}0 \\ 0 \\ 1\end{bmatrix} \end{aligned} \hspace{\stretch{1}}(2.5)

Verifying the commutator relations.

Given the (summation convention) matrix representation for the spin one operators

\begin{aligned}(S_i)_{jk} = - i \hbar \epsilon_{ijk},\end{aligned} \hspace{\stretch{1}}(2.11)

let’s demonstrate the commutator relation of (26.25).

\begin{aligned}{\left[{S_i},{S_j}\right]}_{rs} &=(S_i S_j - S_j S_i)_{rs} \\ &=\sum_t (S_i)_{rt} (S_j)_{ts} - (S_j)_{rt} (S_i)_{ts} \\ &=(-i\hbar)^2 \sum_t \epsilon_{irt} \epsilon_{jts} - \epsilon_{jrt} \epsilon_{its} \\ &=-(-i\hbar)^2 \sum_t \epsilon_{tir} \epsilon_{tjs} - \epsilon_{tjr} \epsilon_{tis} \\ \end{aligned}

Now we can employ the summation rule for the product of two antisymmetric tensors summed over one common index (4.179)

\begin{aligned}\sum_i \epsilon_{ijk} \epsilon_{iab}= \delta_{ja}\delta_{kb}-\delta_{jb}\delta_{ka}.\end{aligned} \hspace{\stretch{1}}(2.12)

Continuing we get

\begin{aligned}{\left[{S_i},{S_j}\right]}_{rs} &=-(-i\hbar)^2 \left(\delta_{ij}\delta_{rs}-\delta_{is}\delta_{rj}-\delta_{ji}\delta_{rs}+\delta_{js}\delta_{ri} \right) \\ &=(-i\hbar)^2 \left( \delta_{is}\delta_{jr}-\delta_{ir} \delta_{js}\right)\\ &=(-i\hbar)^2 \sum_t \epsilon_{tij} \epsilon_{tsr}\\ &=i\hbar \sum_t \epsilon_{tij} (S_t)_{rs}\qquad\square\end{aligned}
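Since everything here is a concrete 3 \times 3 matrix, this commutator relation is also easy to check numerically. The sketch below is my own, not from the text; it builds (S_i)_{jk} = -i\hbar\epsilon_{ijk} with \hbar = 1 and tests [S_i, S_j] against i\hbar \sum_t \epsilon_{tij} S_t:

```python
import numpy as np

# Levi-Civita symbol eps[i, j, k]
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[i, k, j] = -1.0

hbar = 1.0
S = -1j * hbar * eps          # (S_i)_{jk} = -i hbar eps_{ijk}

def commutator_error():
    """Largest deviation of [S_i, S_j] from i hbar eps_{tij} S_t."""
    worst = 0.0
    for i in range(3):
        for j in range(3):
            lhs = S[i] @ S[j] - S[j] @ S[i]
            rhs = 1j * hbar * sum(eps[t, i, j] * S[t] for t in range(3))
            worst = max(worst, np.abs(lhs - rhs).max())
    return worst
```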

General infinitesimal rotation.

Equation (26.26) has for an infinitesimal rotation counterclockwise around the unit axis of rotation vector \mathbf{n}

\begin{aligned}\mathbf{V}' = \mathbf{V} + \epsilon \mathbf{n} \times \mathbf{V}.\end{aligned} \hspace{\stretch{1}}(2.13)

Let’s derive this using the geometric algebra rotation expression for the same

\begin{aligned}\mathbf{V}' &=e^{-I\mathbf{n} \alpha/2}\mathbf{V} e^{I\mathbf{n} \alpha/2} \\ &=e^{-I\mathbf{n} \alpha/2}\left((\mathbf{V} \cdot \mathbf{n})\mathbf{n}+(\mathbf{V} \wedge \mathbf{n})\mathbf{n}\right)e^{I\mathbf{n} \alpha/2} \\ &=(\mathbf{V} \cdot \mathbf{n})\mathbf{n}+(\mathbf{V} \wedge \mathbf{n})\mathbf{n} e^{I\mathbf{n} \alpha}\end{aligned}

We note that I\mathbf{n}, and thus the exponential, commutes with \mathbf{n} and with the projection component along the normal direction. Similarly, I\mathbf{n} anticommutes with (\mathbf{V} \wedge \mathbf{n}) \mathbf{n}. This leaves us with

\begin{aligned}\mathbf{V}' &=(\mathbf{V} \cdot \mathbf{n})\mathbf{n}+(\mathbf{V} \wedge \mathbf{n})\mathbf{n}( \cos \alpha + I \mathbf{n} \sin\alpha)\end{aligned}

For \alpha = \epsilon \rightarrow 0, this is

\begin{aligned}\mathbf{V}' &=(\mathbf{V} \cdot \mathbf{n})\mathbf{n}+(\mathbf{V} \wedge \mathbf{n})\mathbf{n}( 1 + I \mathbf{n} \epsilon) \\ &=(\mathbf{V} \cdot \mathbf{n})\mathbf{n} +(\mathbf{V} \wedge \mathbf{n})\mathbf{n}+\epsilon I^2(\mathbf{V} \times \mathbf{n})\mathbf{n}^2 \\ &=\mathbf{V}+ \epsilon (\mathbf{n} \times \mathbf{V}) \qquad\square\end{aligned}
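The small-angle result 2.13 can also be sanity checked numerically against the full finite-angle rotation. Rodrigues' rotation formula used below is standard; the sketch itself is mine, not from the text:

```python
import numpy as np

def rodrigues(v, n, alpha):
    """Rotate vector v counterclockwise about the unit axis n by alpha."""
    n = n / np.linalg.norm(n)
    return (v * np.cos(alpha)
            + np.cross(n, v) * np.sin(alpha)
            + n * np.dot(n, v) * (1.0 - np.cos(alpha)))

v = np.array([1.0, 2.0, 3.0])
n = np.array([0.0, 0.0, 1.0])
eps = 1e-6
exact = rodrigues(v, n, eps)
approx = v + eps * np.cross(n, v)   # eq. (2.13), first order in eps
```

The two agree to O(\epsilon^2), as expected for an infinitesimal rotation.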

Position and angular momentum commutator.

Equation (26.71) is

\begin{aligned}\left[{x_i},{L_j}\right] = i \hbar \epsilon_{ijk} x_k.\end{aligned} \hspace{\stretch{1}}(2.14)

Let’s derive this. Recall that we have for the position-momentum commutator

\begin{aligned}\left[{x_i},{p_j}\right] = i \hbar \delta_{ij},\end{aligned} \hspace{\stretch{1}}(2.15)

and for each of the angular momentum operator components we have

\begin{aligned}L_m = \epsilon_{mab} x_a p_b.\end{aligned} \hspace{\stretch{1}}(2.16)

The commutator of interest is thus

\begin{aligned}\left[{x_i},{L_j}\right] &= x_i \epsilon_{jab} x_a p_b -\epsilon_{jab} x_a p_b x_i \\ &= \epsilon_{jab} x_a\left(x_i p_b -p_b x_i \right) \\ &=\epsilon_{jab} x_ai \hbar \delta_{ib} \\ &=i \hbar \epsilon_{jai} x_a \\ &=i \hbar \epsilon_{ija} x_a \qquad\square\end{aligned}

A note on the angular momentum operator exponential sandwiches.

In (26.73-74) we have

\begin{aligned}e^{i \epsilon L_z/\hbar} x e^{-i \epsilon L_z/\hbar} = x + \frac{i \epsilon}{\hbar} \left[{L_z},{x}\right]\end{aligned} \hspace{\stretch{1}}(2.17)

Observe that each successive nested commutator in the operator expansion (10.99) carries another factor of \epsilon, so keeping only the terms of first order in \epsilon from

\begin{aligned}e^{A} B e^{-A}= B + \left[{A},{B}\right]+\frac{1}{{2}} \left[{A},{\left[{A},{B}\right]}\right] \cdots\end{aligned} \hspace{\stretch{1}}(2.19)

we get the desired result.
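A cheap way to check this operator expansion exactly (my addition, assuming sympy): pick a nilpotent A with A^2 = 0, for which e^{tA} = 1 + tA holds exactly and the commutator series terminates at the second-order term:

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[0, 1], [0, 0]])   # nilpotent: A**2 = 0, so exp(t*A) = 1 + t*A exactly
B = sp.Matrix([[1, 2], [3, 4]])   # an arbitrary matrix

comm = lambda X, Y: X * Y - Y * X

lhs = (sp.eye(2) + t * A) * B * (sp.eye(2) - t * A)
rhs = B + t * comm(A, B) + sp.Rational(1, 2) * t**2 * comm(A, comm(A, B))
residual = (lhs - rhs).expand()   # should vanish identically
```

With A^2 = 0 all third- and higher-order nested commutators vanish, so the two-term-plus-half identity is exact rather than a truncation.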

Trace relation to the determinant.

Going from (26.90) to (26.91) we appear to have a mystery identity

\begin{aligned}\det \left( \mathbf{1} + \mu \mathbf{A} \right) = 1 + \mu \text{Tr} \mathbf{A}\end{aligned} \hspace{\stretch{1}}(2.20)

According to Wikipedia, under derivative of a determinant [3], this holds to first order for small \mu, and is related to Jacobi's formula. Someday I should really get around to studying determinants in depth; I will take this one for granted for now.
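The first-order statement is easy to verify symbolically for a 3x3 matrix (a check I added, assuming sympy): expanding \det(\mathbf{1} + \mu \mathbf{A}) in powers of \mu, the constant term is 1 and the coefficient of \mu is exactly \text{Tr} \mathbf{A}, with everything else at O(\mu^2):

```python
import sympy as sp

mu = sp.symbols('mu')
A = sp.Matrix(3, 3, sp.symbols('a0:9'))       # generic 3x3 matrix
d = (sp.eye(3) + mu * A).det().expand()

# coefficient of mu^1 in det(1 + mu A): should be exactly Tr A
first_order = d.coeff(mu, 1)
```

The same expansion shows that the quadratic and cubic terms involve the 2x2 minors and the full determinant, which is why the identity is only good for small \mu.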

References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.

[2] Peeter Joot. Notes and problems for Desai Chapter V. [online]. http://sites.google.com/site/peeterjoot/math2010/desaiCh5.pdf.

[3] Wikipedia. Determinant — wikipedia, the free encyclopedia [online]. 2010. [Online; accessed 10-December-2010]. http://en.wikipedia.org/w/index.php?title=Determinant&oldid=400983667.

Posted in Math and Physics Learning. | Tagged: , , , , , | Leave a Comment »

My submission for PHY356 (Quantum Mechanics I) Problem Set 5.

Posted by peeterjoot on December 7, 2010

[Click here for a PDF of this post with nicer formatting]

Grading notes.

The pdf version above has been adjusted after seeing the comments from the grading. [Click here for the PDF of the original submission, as found below, uncorrected.]

Problem.

Statement

A particle of mass m moves along the x-direction such that V(X)=\frac{1}{{2}}KX^2. Is the state

\begin{aligned}u(\xi) = B \xi e^{+\xi^2/2},\end{aligned} \hspace{\stretch{1}}(2.1)

where \xi is given by Eq. (9.60), B is a constant, and time t=0, an energy eigenstate of the system? What is the probability per unit length for measuring the particle at position x=0 at t=t_0>0? Explain the physical meaning of the above results.

Solution

Is this state an energy eigenstate?

Recall that \xi = \alpha x, \alpha = \sqrt{m\omega/\hbar}, and K = m \omega^2. With this variable substitution Schrödinger's equation for this harmonic oscillator potential takes the form

\begin{aligned}\frac{d^2 u}{d\xi^2} - \xi^2 u = \frac{2 E }{\hbar\omega} u\end{aligned} \hspace{\stretch{1}}(2.2)

While we can blindly substitute a function of the form \xi e^{\xi^2/2} into this to get

\begin{aligned}\frac{1}{{B}} \left(\frac{d^2 u}{d\xi^2} - \xi^2 u\right)&=\frac{d}{d\xi} \left( 1 + \xi^2 \right) e^{\xi^2/2} - \xi^3 e^{\xi^2/2} \\ &=\left( 2 \xi + \xi + \xi^3 \right) e^{\xi^2/2} - \xi^3 e^{\xi^2/2} \\ &=3 \xi e^{\xi^2/2}\end{aligned}

and formally make the identification E = 3 \omega \hbar/2 = (1 + 1/2) \omega \hbar, this isn’t a normalizable wavefunction, and has no physical relevance, unless we set B = 0.

By changing the problem, this state could be physically relevant. We’d require a potential of the form

\begin{aligned}V(x) =\left\{\begin{array}{l l}f(x) & \quad \mbox{if } x < a \\ \frac{1}{{2}} K x^2 & \quad \mbox{if } a < x < b \\ g(x) & \quad \mbox{if } x > b \\ \end{array}\right.\end{aligned} \hspace{\stretch{1}}(2.3)

For example, f(x) = V_1, g(x) = V_2, for constant V_1, V_2. For such a potential, within the harmonic well, a general solution of the form

\begin{aligned}u(x,t) = \sum_n H_n(\xi) \Bigl(A_n e^{-\xi^2/2} + B_n e^{\xi^2/2} \Bigr) e^{-i E_n t/\hbar},\end{aligned} \hspace{\stretch{1}}(2.4)

is possible, since normalization would not prohibit non-zero B_n values in that situation. For the wave function to be physically relevant, we require it to be (absolute) square integrable, and it must also integrate to unity over the entire interval.

Probability per unit length at x=0.

We cannot answer the question for the probability that the particle is found at the specific x=0 position at t=t_0 (that probability is zero in a continuous space), but we can answer the question for the probability that the particle is found in an interval surrounding a specific point at this time. By calculating the average of the probability to find the particle in an interval, and dividing by that interval's length, we arrive at a plausible definition of the probability per unit length for an interval surrounding x = x_0

\begin{aligned}P = \text{Probability per unit length near } x = x_0 =\lim_{\epsilon \rightarrow 0} \frac{1}{{\epsilon}} \int_{x_0 - \epsilon/2}^{x_0 + \epsilon/2} {\left\lvert{ \Psi(x, t_0) }\right\rvert}^2 dx = {\left\lvert{\Psi(x_0, t_0)}\right\rvert}^2\end{aligned} \hspace{\stretch{1}}(2.5)

By this definition, the probability per unit length is just the probability density itself, evaluated at the point of interest.

Physically, for an interval small enough that the probability density is effectively constant over it, this probability per unit length times the interval's length represents the probability of finding the particle in that interval.
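A quick numeric illustration of this limit (my own, with an arbitrary Gaussian density standing in for {\left\lvert{\Psi}\right\rvert}^2): the interval average of the density converges to the density at the point as \epsilon shrinks:

```python
import math

# a stand-in probability density: oscillator ground state with alpha = 1
def rho(x):
    return math.exp(-x * x) / math.sqrt(math.pi)

x0 = 0.3
averages = []
for eps in (0.1, 0.01, 0.001):
    n = 1000
    h = eps / n
    # average of the density over [x0 - eps/2, x0 + eps/2] (midpoint rule)
    avg = sum(rho(x0 - eps / 2 + (k + 0.5) * h) for k in range(n)) * h / eps
    averages.append(avg)
```

The error in the interval average falls off like \epsilon^2, so the last average is already indistinguishable from \rho(x_0) at modest precision.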

Probability per unit length for the non-normalizable state given.

It seems possible, albeit odd, that this question is asking for the probability per unit length for the non-normalizable E_1 wavefunction 2.1. Since normalization requires B=0, that probability density is simply zero (or undefined, depending on one’s point of view).

Probability per unit length for some more interesting harmonic oscillator states.

Suppose we form the wavefunction for a superposition of all the normalizable states

\begin{aligned}u(x,t) = \sum_n A_n H_n(\xi) e^{-\xi^2/2} e^{-i E_n t/\hbar}\end{aligned} \hspace{\stretch{1}}(2.6)

Here it is assumed that the A_n coefficients yield unit probability

\begin{aligned}\int {\left\lvert{u(x,0)}\right\rvert}^2 dx = \sum_n {\left\lvert{A_n}\right\rvert}^2 = 1\end{aligned} \hspace{\stretch{1}}(2.7)

For the impure state of 2.6 we have for the probability density

\begin{aligned}{\left\lvert{u}\right\rvert}^2&=\sum_{m,n}A_n A_m^{*} H_n(\xi) H_m(\xi) e^{-\xi^2} e^{-i (E_n - E_m)t_0/\hbar} \\ &=\sum_n{\left\lvert{A_n}\right\rvert}^2 (H_n(\xi))^2 e^{-\xi^2}+\sum_{m \ne n}A_n A_m^{*} H_n(\xi) H_m(\xi) e^{-\xi^2} e^{-i (E_n - E_m)t_0/\hbar} \\ &=\sum_n{\left\lvert{A_n}\right\rvert}^2 (H_n(\xi))^2 e^{-\xi^2}+\sum_{m < n}H_n(\xi) H_m(\xi)e^{-\xi^2}\left(A_n A_m^{*}e^{-i (E_n - E_m)t_0/\hbar}+A_m A_n^{*}e^{-i (E_m - E_n)t_0/\hbar}\right) \\ &=\sum_n{\left\lvert{A_n}\right\rvert}^2 (H_n(\xi))^2 e^{-\xi^2}+2 \sum_{m < n}H_n(\xi) H_m(\xi)e^{-\xi^2}\Re \left(A_n A_m^{*}e^{-i (E_n - E_m)t_0/\hbar}\right) \\ &=\sum_n{\left\lvert{A_n}\right\rvert}^2 (H_n(\xi))^2 e^{-\xi^2} \\ &\quad+2 \sum_{m < n}H_n(\xi) H_m(\xi)e^{-\xi^2}\left(\Re ( A_n A_m^{*} ) \cos( (n - m)\omega t_0)+\Im ( A_n A_m^{*} ) \sin( (n - m)\omega t_0)\right)\end{aligned}

Evaluating at the point x = 0, we have

\begin{aligned}{\left\lvert{u(0,t_0)}\right\rvert}^2=\sum_n{\left\lvert{A_n}\right\rvert}^2 (H_n(0))^2 +2 \sum_{m < n} H_n(0) H_m(0) \left( \Re ( A_n A_m^{*} ) \cos( (n - m)\omega t_0) +\Im ( A_n A_m^{*} ) \sin( (n - m)\omega t_0)\right)\end{aligned} \hspace{\stretch{1}}(2.8)

It is interesting that the probability per unit length only has time dependence for a mixed state.

For a pure state and its wavefunction u(x,t) = N_n H_n(\xi) e^{-\xi^2/2} e^{-i E_n t/\hbar} we have just

\begin{aligned}{\left\lvert{u(0,t_0)}\right\rvert}^2=N_n^2 (H_n(0))^2 = \frac{\alpha}{\sqrt{\pi} 2^n n!} H_n(0)^2\end{aligned} \hspace{\stretch{1}}(2.9)

This is zero for odd n. For even n the Hermite values at the origin are H_n(0) = (-1)^{n/2} n!/(n/2)!, so for non-mixed states with even energy quantum numbers, the probability per unit length at x=0 takes the value {\left\lvert{u(0,t_0)}\right\rvert}^2 = \frac{\alpha \, n!}{\sqrt{\pi}\, 2^n ((n/2)!)^2}.
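As a quick check of the H_n(0) values (this check is mine, not part of the original problem set), sympy's Hermite polynomials confirm that H_n(0) vanishes for odd n and equals (-1)^{n/2} n!/(n/2)! for even n:

```python
import sympy as sp
from math import factorial

# physicists' Hermite polynomials evaluated at the origin
values = {n: sp.hermite(n, 0) for n in range(9)}

# odd n vanish; even n follow H_n(0) = (-1)^{n/2} n!/(n/2)!
odd_ok = all(values[n] == 0 for n in range(1, 9, 2))
even_ok = all(values[n] == (-1)**(n // 2) * (factorial(n) // factorial(n // 2))
              for n in range(0, 9, 2))
```

For example H_2(0) = -2 and H_4(0) = 12, so (H_n(0))^2 grows much faster than 2^n.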

Posted in Math and Physics Learning. | Tagged: , , , , , | Leave a Comment »

My submission for PHY356 (Quantum Mechanics I) Problem Set 4.

Posted by peeterjoot on December 7, 2010

[Click here for a PDF of this post with nicer formatting]

Grading notes.

The pdf version above has been adjusted with some grading commentary. [Click here for the PDF of the original submission, as found below.]

Problem 1.

Statement

Is it possible to derive the eigenvalues and eigenvectors presented in Section 8.2 from those in Section 8.1.2? What does this say about the potential energy operator in these two situations?

For reference, 8.1.2 was a finite potential well, V(x) = V_0 for {\left\lvert{x}\right\rvert} > a, and zero in the interior of the well. This had trigonometric solutions in the interior, which died off exponentially past the boundary of the well.

On the other hand, 8.2 was a delta function potential V(x) = -g \delta(x), which had the solution u(x) = \sqrt{\beta} e^{-\beta {\left\lvert{x}\right\rvert}}, where \beta = m g/\hbar^2.

Solution

The pair of figures in the text [1] for these potentials doesn't make it obvious that there are any similarities. The attractive delta function potential isn't illustrated (the delta function itself is, but with opposite sign), and the scaling and the reference energy levels are different. Let's illustrate these using the same reference energy level and sign conventions to make the similarities more obvious.

(Figure: 8.1.2 Finite Well potential (with energy shifted downwards by V_0).)

(Figure: 8.2 Delta function potential.)

The physics isn’t changed by picking a different point for the reference energy level, so let’s compare the two potentials, and their solutions using V(x) = 0 outside of the well for both cases. The method used to solve the finite well problem in the text is hard to follow, so re-doing this from scratch in a slightly tidier way doesn’t hurt.

Schrödinger's equation for the finite well, in the {\left\lvert{x}\right\rvert} > a region, is

\begin{aligned}-\frac{\hbar^2}{2m} u'' = E u = - E_B u,\end{aligned} \hspace{\stretch{1}}(2.1)

where a positive bound state energy E_B = -E > 0 has been introduced.

Writing

\begin{aligned}\beta = \sqrt{\frac{2 m E_B}{\hbar^2}},\end{aligned} \hspace{\stretch{1}}(2.2)

the wave functions outside of the well are

\begin{aligned}u(x) =\left\{\begin{array}{l l}u(-a) e^{\beta(x+a)} &\quad \mbox{if } x < -a \\ u(a) e^{-\beta(x-a)} &\quad \mbox{if } x > a \\ \end{array}\right.\end{aligned} \hspace{\stretch{1}}(2.3)

Within the well Schrödinger's equation is

\begin{aligned}-\frac{\hbar^2}{2m} u'' - V_0 u = E u = - E_B u,\end{aligned} \hspace{\stretch{1}}(2.4)

or

\begin{aligned}u'' = - \frac{2m}{\hbar^2} (V_0 - E_B) u,\end{aligned} \hspace{\stretch{1}}(2.5)

Noting that the bound state energies are the E_B < V_0 values, let \alpha^2 = 2m (V_0 - E_B)/\hbar^2, so that the solutions are of the form

\begin{aligned}u(x) = A e^{i\alpha x} + B e^{-i\alpha x}.\end{aligned} \hspace{\stretch{1}}(2.6)

As was done for the wave functions outside of the well, the normalization constants can be expressed in terms of the values of the wave functions on the boundary. That provides a pair of equations to solve

\begin{aligned}\begin{bmatrix}u(a) \\ u(-a)\end{bmatrix}=\begin{bmatrix}e^{i \alpha a} & e^{-i \alpha a} \\ e^{-i \alpha a} & e^{i \alpha a}\end{bmatrix}\begin{bmatrix}A \\ B\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(2.7)

Inverting this and substitution back into 2.6 yields

\begin{aligned}u(x) &=\begin{bmatrix}e^{i\alpha x} & e^{-i\alpha x}\end{bmatrix}\begin{bmatrix}A \\ B\end{bmatrix} \\ &=\begin{bmatrix}e^{i\alpha x} & e^{-i\alpha x}\end{bmatrix}\frac{1}{{e^{2 i \alpha a} - e^{-2 i \alpha a}}}\begin{bmatrix}e^{i \alpha a} & -e^{-i \alpha a} \\ -e^{-i \alpha a} & e^{i \alpha a}\end{bmatrix}\begin{bmatrix}u(a) \\ u(-a)\end{bmatrix} \\ &=\begin{bmatrix}\frac{\sin(\alpha (a + x))}{\sin(2 \alpha a)} &\frac{\sin(\alpha (a - x))}{\sin(2 \alpha a)}\end{bmatrix}\begin{bmatrix}u(a) \\ u(-a)\end{bmatrix}.\end{aligned}

Expanding the last of these matrix products the wave function is close to completely specified.

\begin{aligned}u(x) =\left\{\begin{array}{l l}u(-a) e^{\beta(x+a)} & \quad \mbox{if } x < -a \\ u(a) \frac{\sin(\alpha (a + x))}{\sin(2 \alpha a)} +u(-a) \frac{\sin(\alpha (a - x))}{\sin(2 \alpha a)} & \quad \mbox{if } {\left\lvert{x}\right\rvert} < a \\ u(a) e^{-\beta(x-a)} & \quad \mbox{if } x > a \\ \end{array}\right.\end{aligned} \hspace{\stretch{1}}(2.8)

There are still two unspecified constants u(\pm a) and the constraints on E_B have not been determined (both \alpha and \beta are functions of that energy level). It should be possible to eliminate at least one of the u(\pm a) by computing the wavefunction normalization, and since the well is being narrowed the \alpha term will not be relevant. Since only the vanishingly narrow case where a \rightarrow 0, x \in [-a,a] is of interest, the wave function in that interval approaches

\begin{aligned}u(x) \rightarrow \frac{1}{{2}} (u(a) + u(-a)) + \frac{x}{2a} ( u(a) - u(-a) ) \rightarrow \frac{1}{{2}} (u(a) + u(-a)).\end{aligned} \hspace{\stretch{1}}(2.9)

Since no discontinuity is expected this is just u(a) = u(-a). Let’s write \lim_{a\rightarrow 0} u(a) = A for short, and the limited width well wave function becomes

\begin{aligned}u(x) =\left\{\begin{array}{l l}A e^{\beta x} & \quad \mbox{if } x < 0 \\ A e^{-\beta x} & \quad \mbox{if } x > 0 \\ \end{array}\right.\end{aligned} \hspace{\stretch{1}}(2.10)

This is now the same form as the delta function potential, and normalization also gives A = \sqrt{\beta}.

One task remains before the attractive delta function potential can be considered a limiting case for the finite well, since the relation between a, V_0, and g has not been established. To do so, integrate the Schrödinger equation over the infinitesimal range [-a,a]. This was done in the text for the delta function potential, and that provided the relation

\begin{aligned}\beta = \frac{mg}{\hbar^2}\end{aligned} \hspace{\stretch{1}}(2.11)

For the finite well this is

\begin{aligned}-\frac{\hbar^2}{2m} \int_{-a}^a u'' - V_0 \int_{-a}^a u = -E_B \int_{-a}^a u\end{aligned} \hspace{\stretch{1}}(2.12)

In the limit as a \rightarrow 0 this is

\begin{aligned}\frac{\hbar^2}{2m} (u'(a) - u'(-a)) + V_0 2 a u(0) = 2 E_B a u(0).\end{aligned} \hspace{\stretch{1}}(2.13)

Some care is required with the V_0 a term since a \rightarrow 0 as V_0 \rightarrow \infty, but the E_B term is unambiguously killed, leaving

\begin{aligned}\frac{\hbar^2}{2m} u(0) (-2\beta e^{-\beta a}) = -V_0 2 a u(0).\end{aligned} \hspace{\stretch{1}}(2.14)

The exponential vanishes in the limit and leaves

\begin{aligned}\beta = \frac{m (2 a) V_0}{\hbar^2}\end{aligned} \hspace{\stretch{1}}(2.15)

Comparing to 2.11 from the attractive delta function completes the problem. The conclusion is that when the finite well is narrowed with a \rightarrow 0, also letting V_0 \rightarrow \infty such that the absolute area of the well g = (2 a) V_0 is maintained, the finite potential well produces exactly the attractive delta function wave function and associated bound state energy.
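This limit can also be watched numerically (my own sketch, in natural units \hbar = m = 1 with the well area g = 2 a V_0 = 1 chosen arbitrarily). The standard even bound-state condition for the finite well, k \tan(k a) = \beta with k^2 = 2m(V_0 - E_B)/\hbar^2, can be solved by bisection for a sequence of narrowing wells, and \beta approaches m g/\hbar^2:

```python
import math

# units with hbar = m = 1; keep the well "area" g = 2 a V0 fixed while a -> 0
g = 1.0

def bound_state_beta(a):
    """Decay constant beta = sqrt(2 E_B) of the even ground state of a
    square well of half-width a and depth V0 = g/(2a)."""
    V0 = g / (2 * a)
    # even bound state condition: k tan(k a) = beta, with k^2 = 2 (V0 - E_B)
    def f(E):
        k = math.sqrt(2 * (V0 - E))
        return k * math.tan(k * a) - math.sqrt(2 * E)
    lo, hi = 1e-12, V0 - 1e-12
    for _ in range(200):        # bisection: f > 0 below the root, f < 0 above
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    E_B = 0.5 * (lo + hi)
    return math.sqrt(2 * E_B)

beta_narrow = bound_state_beta(1e-3)   # should approach m g / hbar^2 = 1
```

For a = 10^{-3} the computed decay constant is already within a fraction of a percent of the delta function value \beta = m g/\hbar^2.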

Problem 2.

Statement

For the hydrogen atom, determine {\langle {nlm} \rvert}(1/R){\lvert {nlm} \rangle} and 1/{\langle {nlm} \rvert}R{\lvert {nlm} \rangle} such that (nlm)=(211) and R is the radial position operator (X^2+Y^2+Z^2)^{1/2}. What do these quantities represent physically and are they the same?

Solution

Both of the computational tasks for the hydrogen-like atom require expansion of a braket of the following form

\begin{aligned}{\langle {nlm} \rvert} A(R) {\lvert {nlm} \rangle},\end{aligned} \hspace{\stretch{1}}(3.16)

where A(R) = R = (X^2 + Y^2 + Z^2)^{1/2} or A(R) = 1/R.

The spherical representation of the identity resolution is required to convert this braket into integral form

\begin{aligned}\mathbf{1} = \int r^2 \sin\theta dr d\theta d\phi {\lvert { r \theta \phi} \rangle}{\langle { r \theta \phi} \rvert},\end{aligned} \hspace{\stretch{1}}(3.17)

where the spherical wave function is given by the braket \left\langle{{ r \theta \phi}} \vert {{nlm}}\right\rangle = R_{nl}(r) Y_{lm}(\theta,\phi).

Additionally, the radial form of the delta function will be required, which is

\begin{aligned}\delta(\mathbf{x} - \mathbf{x}') = \frac{1}{{r^2 \sin\theta}} \delta(r - r') \delta(\theta - \theta') \delta(\phi - \phi')\end{aligned} \hspace{\stretch{1}}(3.18)

Two applications of the identity operator to the braket yield

\begin{aligned}{\langle {nlm} \rvert} A(R) {\lvert {nlm} \rangle} &={\langle {nlm} \rvert} \mathbf{1} A(R) \mathbf{1} {\lvert {nlm} \rangle} \\ &=\int dr d\theta d\phi dr' d\theta' d\phi' r^2 \sin\theta {r'}^2 \sin\theta' \left\langle{{nlm}} \vert {{ r \theta \phi}}\right\rangle{\langle { r \theta \phi} \rvert} A(R) {\lvert { r' \theta' \phi'} \rangle}\left\langle{{ r' \theta' \phi'}} \vert {{nlm}}\right\rangle \\ &=\int dr d\theta d\phi dr' d\theta' d\phi' r^2 \sin\theta {r'}^2 \sin\theta' R_{nl}(r) Y_{lm}^{*}(\theta, \phi){\langle { r \theta \phi} \rvert} A(R) {\lvert { r' \theta' \phi'} \rangle}R_{nl}(r') Y_{lm}(\theta', \phi')\end{aligned}

To continue, an assumption about the matrix element {\langle { r \theta \phi} \rvert} A(R) {\lvert { r' \theta' \phi'} \rangle} is required. It seems reasonable that this would be

\begin{aligned}{\langle { r \theta \phi} \rvert} A(R) {\lvert { r' \theta' \phi'} \rangle} = \delta(\mathbf{x} - \mathbf{x}') A(r) = \frac{1}{{r^2 \sin\theta}} \delta(r-r') \delta(\theta -\theta')\delta(\phi-\phi') A(r).\end{aligned} \hspace{\stretch{1}}(3.19)

The braket can now be written completely in integral form as

\begin{aligned}{\langle {nlm} \rvert} A(R) {\lvert {nlm} \rangle} &=\int dr d\theta d\phi dr' d\theta' d\phi' r^2 \sin\theta {r'}^2 \sin\theta' R_{nl}(r) Y_{lm}^{*}(\theta, \phi)\frac{1}{{r^2 \sin\theta}} \delta(r-r') \delta(\theta -\theta')\delta(\phi-\phi') A(r)R_{nl}(r') Y_{lm}(\theta', \phi') \\ &=\int dr d\theta d\phi dr' d\theta' d\phi' {r'}^2 \sin\theta' R_{nl}(r) Y_{lm}^{*}(\theta, \phi)\delta(r-r') \delta(\theta -\theta')\delta(\phi-\phi') A(r)R_{nl}(r') Y_{lm}(\theta', \phi')\end{aligned}

Application of the delta functions then reduces the integral. Since the only \theta and \phi dependence is in the (orthonormal) Y_{lm} terms, those factors integrate to unity

\begin{aligned}{\langle {nlm} \rvert} A(R) {\lvert {nlm} \rangle}&=\int dr d\theta d\phi r^2 \sin\theta R_{nl}(r) Y_{lm}^{*}(\theta, \phi)A(r)R_{nl}(r) Y_{lm}(\theta, \phi) \\ &=\int dr r^2 R_{nl}(r) A(r)R_{nl}(r) \underbrace{\int\sin\theta d\theta d\phi Y_{lm}^{*}(\theta, \phi)Y_{lm}(\theta, \phi) }_{=1}\\ \end{aligned}

This leaves just the radial wave functions in the integral

\begin{aligned}{\langle {nlm} \rvert} A(R) {\lvert {nlm} \rangle}=\int dr r^2 R_{nl}^2(r) A(r)\end{aligned} \hspace{\stretch{1}}(3.20)

As a consistency check, observe that with A(r) = 1, this integral evaluates to 1 according to equation (8.274) in the text, so we can think of (r R_{nl}(r))^2 as the radial probability density for functions of r.

The problem asks specifically for these expectation values for the {\lvert {211} \rangle} state. For that state the radial wavefunction is found in (8.277) as

\begin{aligned}R_{21}(r) = \left(\frac{Z}{2a_0}\right)^{3/2} \frac{ Z r }{a_0 \sqrt{3}} e^{-Z r/2 a_0}\end{aligned} \hspace{\stretch{1}}(3.21)

The braket can now be written explicitly

\begin{aligned}{\langle {21m} \rvert} A(R) {\lvert {21m} \rangle}=\frac{1}{{24}} \left(\frac{ Z }{a_0 } \right)^5\int_0^\infty dr\, r^4 e^{-Z r/ a_0}A(r)\end{aligned} \hspace{\stretch{1}}(3.22)

Now, let’s consider the two functions A(r) separately. First for A(r) = r we have

\begin{aligned}{\langle {21m} \rvert} R {\lvert {21m} \rangle}&=\frac{1}{{24}} \left(\frac{ Z }{a_0 } \right)^5\int_0^\infty dr\, r^5 e^{-Z r/ a_0} \\ &=\frac{ a_0 }{ 24 Z } \int_0^\infty du\, u^5 e^{-u}\end{aligned}

The last integral evaluates to 120, leaving

\begin{aligned}{\langle {21m} \rvert} R {\lvert {21m} \rangle}=\frac{ 5 a_0 }{ Z }.\end{aligned} \hspace{\stretch{1}}(3.23)

The expectation value associated with this {\lvert {21m} \rangle} state for the radial position is found to be proportional to the Bohr radius. For the hydrogen atom where Z=1 this average value for repeated measurements of the physical quantity associated with the operator R is found to be 5 times the Bohr radius for n=2, l=1 states.

Our problem actually asks for the inverse of this expectation value, and for reference this is

\begin{aligned}1/ {\langle {21m} \rvert} R {\lvert {21m} \rangle}=\frac{ Z }{ 5 a_0 } \end{aligned} \hspace{\stretch{1}}(3.24)

Performing the same task for A(R) = 1/R

\begin{aligned}{\langle {21m} \rvert} 1/R {\lvert {21m} \rangle}&=\frac{1}{{24}} \left(\frac{ Z }{a_0 } \right)^5\int_0^\infty dr\, r^3 e^{-Z r/ a_0} \\ &=\frac{1}{{24}} \frac{ Z }{ a_0 } \int_0^\infty du\, u^3 e^{-u}.\end{aligned}

This last integral has value 6, and we have the second part of the computational task complete

\begin{aligned}{\langle {21m} \rvert} 1/R {\lvert {21m} \rangle} = \frac{1}{{4}} \frac{ Z }{ a_0 } \end{aligned} \hspace{\stretch{1}}(3.25)

The question of whether or not 3.24 and 3.25 are equal is thus answered: they are not.
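As a sanity check (my addition, assuming sympy), both expectation values, as well as the normalization, can be computed directly from the R_{21} radial wavefunction:

```python
import sympy as sp

r, Z, a0 = sp.symbols('r Z a_0', positive=True)

# the R_21 radial wavefunction, as quoted from (8.277)
R21 = (Z / (2 * a0))**sp.Rational(3, 2) * (Z * r / (a0 * sp.sqrt(3))) \
      * sp.exp(-Z * r / (2 * a0))

norm = sp.integrate(r**2 * R21**2, (r, 0, sp.oo))       # should be 1
exp_R = sp.integrate(r**3 * R21**2, (r, 0, sp.oo))      # <R>   = 5 a0 / Z
exp_invR = sp.integrate(r * R21**2, (r, 0, sp.oo))      # <1/R> = Z / (4 a0)
```

The two results 5 a_0/Z and Z/(4 a_0) drop out directly, confirming that the expectation of the inverse is not the inverse of the expectation.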

Still remaining for this problem is the question of what these quantities represent physically.

The quantity {\langle {nlm} \rvert} R {\lvert {nlm} \rangle} is the expectation value for the radial position of the particle measured from the center of mass of the system. This is the average outcome for many measurements of this radial distance when the system is prepared in the state {\lvert {nlm} \rangle} prior to each measurement.

Interestingly, the physical quantity that we associate with the operator R has a different measurable value than the inverse of the expectation value for the inverted operator 1/R. Regardless, we have a physical (observable) quantity associated with the operator 1/R, and when the system is prepared in state {\lvert {21m} \rangle} prior to each measurement, the average outcome of many measurements of this physical quantity is {\langle {21m} \rvert} 1/R {\lvert {21m} \rangle} = Z/(n^2 a_0), a quantity inversely proportional to the Bohr radius.

ASIDE: Comparing to the general case.

As a confirmation of the results obtained, we can check 3.24 and 3.25 against the general form of the expectation values \left\langle{{R^s}}\right\rangle for various powers s of the radial position operator. These can be found in locations such as farside.ph.utexas.edu, which gives them for Z=1 (without proof), and in [2] (where these and harder-looking expectation values are left as an exercise for the reader to prove). Both of those give:

\begin{aligned}\left\langle{{R}}\right\rangle &= \frac{a_0}{2} ( 3 n^2 -l (l+1) ) \\ \left\langle{{1/R}}\right\rangle &= \frac{1}{n^2 a_0} \end{aligned} \hspace{\stretch{1}}(3.26)

It is curious to me that in the general expectation values noted in 3.26 there is an l quantum number dependence for \left\langle{{R}}\right\rangle, but only an n quantum number dependence for \left\langle{{1/R}}\right\rangle. It is not obvious to me why this would be the case.

References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.

[2] R. Liboff. Introductory quantum mechanics. Cambridge: Addison-Wesley Press, Inc, 2003.

Posted in Math and Physics Learning. | Tagged: , , , , , , , , , , | Leave a Comment »

Desai Chapter 10 notes.

Posted by peeterjoot on December 7, 2010

[Click here for a PDF of this post with nicer formatting]

Motivation.

Chapter 10 notes for [1].

Notes

In 10.3 (interaction with an electric field), Green's functions are introduced to solve the first order differential equation

\begin{aligned}\frac{da}{dt} + i \omega_0 a = - i \omega_0 \lambda(t)\end{aligned} \hspace{\stretch{1}}(2.1)

A simpler way is the usual variation-of-parameters trick: assume that we can take the constant term in the homogeneous solution and allow it to vary with time.

Since our homogeneous solution is of the form

\begin{aligned}a_H(t) = a_H(0) e^{-i\omega_0 t},\end{aligned} \hspace{\stretch{1}}(2.2)

we can look for a specific solution to the forcing term equation of the form

\begin{aligned}a_S(t) = f(t) e^{-i\omega_0 t}\end{aligned} \hspace{\stretch{1}}(2.3)

We get

\begin{aligned}f' = -i \omega_0 \lambda(t) e^{i \omega_0 t}\end{aligned} \hspace{\stretch{1}}(2.4)

which can be integrated directly to find the non-homogeneous solution

\begin{aligned}a_S(t) = a_S(t_0) e^{-i \omega_0 (t - t_0)} - i \omega_0 \int_{t_0}^t \lambda(t') e^{-i \omega_0 (t-t')} dt'\end{aligned} \hspace{\stretch{1}}(2.5)

Setting t_0 = -\infty, with the requirement that a_S(-\infty) = 0, and adding in a general homogeneous solution, one then has 10.92 without the complications of Green's functions or the associated contour integrals. I suppose the author wanted to introduce this as a general purpose tool, and this was a simple way to do so.
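As a numeric spot check of 2.5 (not in the text; \omega_0 = 2 and \lambda(t) = \cos t are arbitrary choices of mine), an RK4 integration of 2.1 from a(0)=0 can be compared against the particular solution with t_0 = 0 and a_S(0) = 0:

```python
import cmath
import math

w0 = 2.0
lam = math.cos   # an arbitrary forcing term, for illustration only

def deriv(t, a):
    # the ODE: da/dt = -i w0 a - i w0 lambda(t)
    return -1j * w0 * (a + lam(t))

# RK4 integration of the ODE from a(0) = 0 up to t = T
T, N = 1.0, 2000
dt = T / N
a = 0j
for k in range(N):
    t = k * dt
    k1 = deriv(t, a)
    k2 = deriv(t + dt / 2, a + dt / 2 * k1)
    k3 = deriv(t + dt / 2, a + dt / 2 * k2)
    k4 = deriv(t + dt, a + dt * k3)
    a += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# the closed-form particular solution, evaluated with the trapezoid rule:
# a(T) = -i w0 int_0^T lambda(t') exp(-i w0 (T - t')) dt'
f = [lam(k * dt) * cmath.exp(-1j * w0 * (T - k * dt)) for k in range(N + 1)]
a_formula = -1j * w0 * (sum(f) - 0.5 * (f[0] + f[-1])) * dt
```

The two complex values agree to well within the quadrature error of the trapezoid rule.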

I didn't personally find his introduction of Green's functions very clear. Specifically, he doesn't actually define what a Green's function is, and the Appendix 20.13 he refers to only discusses the subtleties of the associated contour integration. I didn't understand where equation 10.83 came from in the first place.

Something like the following would have been helpful (the type of argument found in [2])

Given a linear operator L, such that L u(x) = f(x), we search for the Green’s function G(x,s) such that L G(x,s) = \delta(x-s). For such a function we have

\begin{aligned}\int L G(x,s) f(s) ds &= \int \delta(x-s) f(s) ds \\ &= f(x)\end{aligned}

and by linearity we also have

\begin{aligned}f(x) &=\int L G(x,s) f(s) ds \\ &= L \int G(x,s) f(s) ds\end{aligned}

and can therefore identify u(x) = \int G(x,s) f(s) ds as the desired solution to L u(x) = f(x) once the Green’s function G(x,s) associated with operator L has been determined.
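To make this concrete (my own illustration, not Desai's), here is a discrete version for the simplest operator L = d/dx with u(0) = 0, where the matrix inverse plays the role of the Green's function and, for this L, reproduces the step function \theta(x-s):

```python
import numpy as np

# discretize L = d/dx on (0, 1] with u(0) = 0, using backward differences
n = 200
h = 1.0 / n
x = np.linspace(h, 1.0, n)
L = (np.eye(n) - np.eye(n, k=-1)) / h

# the inverse of L is the discrete Green's function: G[i, j] ~ h * G(x_i, s_j)
G = np.linalg.inv(L)

# for this operator the continuum Green's function is the step theta(x - s)
step = np.tril(np.ones((n, n)))

# u(x) = int G(x, s) f(s) ds becomes the matrix product u = G f
f = np.sin(3 * x)
u = G @ f
u_exact = (1 - np.cos(3 * x)) / 3    # exact solution of u' = f, u(0) = 0
```

The columns of G/h are exactly the discrete step functions, and applying G to any source f recovers the antiderivative, up to the O(h) discretization error.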

References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.

[2] Wikipedia. Green’s function — wikipedia, the free encyclopedia [online]. 2010. [Online; accessed 20-November-2010]. http://en.wikipedia.org/w/index.php?title=Green%27s_function&oldid=391186019.

Posted in Math and Physics Learning. | Tagged: , , , , , , | Leave a Comment »