Chapter II notes for .
Based on the canonical relationship , and , Desai determines the form of the operator in continuous space. A consequence of this is that the matrix element of the momentum operator is found to have a delta function specification
In particular the matrix element associated with the state is found to be
Compare this to , where this last is taken as the definition of the momentum operator, and the relationship to the delta function is not spelled out explicitly. This canonical commutator approach, while more abstract, seems to have less black magic involved in the setup. We do require the commutator relationship to be pulled out of a magic hat, but at least the magic show is a structured one based on a small set of core assumptions.
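Since the equations themselves aren't reproduced in these notes, here is a minimal numeric sketch of my own (not from the text), assuming the canonical relationship is [X, P] = i hbar with P = -i hbar d/dx, represented by central differences on a grid:

```python
import numpy as np

# Quick numeric sanity check, assuming the canonical commutator is
# [X, P] = i*hbar with P = -i*hbar d/dx (hbar = 1 here).
N = 400
x = np.linspace(-8.0, 8.0, N)
h = x[1] - x[0]

def D(f):
    """Central-difference derivative; endpoints left at zero."""
    out = np.zeros_like(f, dtype=complex)
    out[1:-1] = (f[2:] - f[:-2]) / (2 * h)
    return out

P = lambda f: -1j * D(f)          # momentum operator with hbar = 1
psi = np.exp(-x**2 / 2)           # Gaussian test wave function

commutator = x * P(psi) - P(x * psi)
interior = slice(2, -2)           # stay away from the untreated endpoints
err = np.max(np.abs(commutator[interior] - 1j * psi[interior]))
# err is O(h^2): the discrete commutator reproduces i*psi(x)
```

The discretization error shows up at second order in the grid spacing, but the structure [X, P] psi = i psi is plainly visible.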
It will likely be good to come back to this later when trying to reconcile this new (for me) Dirac notation with the more basic notation I’m already comfortable with. When trying to compare the two, it will be good to note that there is a matrix element that is implied in the more old fashioned treatment in a book such as .
There is one fundamental assumption that appears to be made in this section that isn’t justified by anything except the end result. That is the assumption that is a derivative-like operator, acting with a product rule action. That’s used to obtain (2.28), and is a fairly black magic operation. This same assumption is also hiding, somewhat sneakily, in the manipulation for (2.44).
If one has to make the assumption that is a derivative-like operator, I don’t feel this method of introducing it is any less arbitrary-seeming. It is still pulled out of a magic hat, only because the answer is known ahead of time. The approach of , where the derivative nature is presented as a consequence of transforming (via Fourier transforms) from the position to the momentum representation, seems much more intuitive and less arbitrary.
Generalized momentum commutator.
It is stated that
Let’s prove this. The case is the canonical commutator, which is assumed. Is there any good way to justify that from first principles, as presented in the text? We have to prove this for , given the relationship for . Expanding the th power commutator we have
Rearranging the result we have
and can insert that in our expansion for
The origin of the statement is not something that seemed obvious. Expanding this out however is straightforward, and clarifies things. That is
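The expansion can also be verified symbolically. Below is a sympy sketch, where I'm assuming the elided statement being proved is [x, p^n] = i hbar n p^(n-1) (the standard generalized commutator):

```python
import sympy as sp

# Symbolic check of what I take the elided statement to be:
# [x, p^n] = i*hbar*n*p^(n-1), with p = -i*hbar*d/dx acting on a
# test function f(x).
x, hbar = sp.symbols('x hbar', real=True)
f = sp.Function('f')(x)

def p(g, n=1):
    """Apply p = -i*hbar*d/dx a total of n times."""
    for _ in range(n):
        g = -sp.I * hbar * sp.diff(g, x)
    return g

ok = all(
    sp.simplify(x * p(f, n) - p(x * f, n) - sp.I * hbar * n * p(f, n - 1)) == 0
    for n in range(1, 5)
)
```

Each power of p brings down one derivative of the product x f, and the single surviving cross term supplies the i hbar n p^(n-1) piece.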
Size of a particle
I found it curious that using instead of , was sufficient to obtain the hydrogen ground state energy , without also having to do any factor of two fudging.
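Spelling out the standard estimate (my assumptions about the elided math: take p ~ hbar/r, so E(r) = hbar^2/(2 m r^2) - e^2/r in Gaussian units, and minimize over r):

```python
import sympy as sp

# Sketch of the uncertainty estimate for hydrogen, under the stated
# assumptions: E(r) = hbar^2/(2 m r^2) - e^2/r, minimized over r.
r, hbar, m, e = sp.symbols('r hbar m e', positive=True)
E = hbar**2 / (2 * m * r**2) - e**2 / r

r0 = sp.solve(sp.diff(E, r), r)[0]      # extremal radius
E0 = sp.simplify(E.subs(r, r0))         # energy at the extremum

# r0 comes out as the Bohr radius hbar^2/(m e^2), and E0 as the exact
# ground state energy -m e^4/(2 hbar^2), with no factor of two fudging.
```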
Space displacement operator.
I’d be curious to know if others find the loose use of equality for approximation after approximation slightly disturbing too?
I also find it curious that (2.140) is written
Is this intentional? It doesn’t seem like ought to be acting on in this case, so why order the terms that way?
Expanding the application of this operator, or at least its first order Taylor series, is helpful to get an idea about this. Doing so, with the original value used in the derivation of the text we have to start
This shows that the factor can be commuted with the momentum operator, as it is not a function of , so the question of , vs above appears to be a non-issue.
Regardless of that conclusion, it seems worthwhile to continue an attempt at expanding this shift operator action on the state vector. Let’s do so by computing the matrix element . That is
This is consistent with the text. It is interesting, and initially surprising that the space displacement operator when applied to a state vector introduces a negative shift in the wave function associated with that state vector. In the derivation of the text, this was associated with the use of integration by parts (ie: due to the sign change in that integration). Here we see it sneak back in, due to the once the momentum operator is expanded completely.
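That negative shift can also be seen numerically. Here's a small sketch of my own: in the momentum representation the displacement operator is just a phase, and applying it and transforming back produces the displaced wave function:

```python
import numpy as np

# Numeric illustration: applying the phase exp(-i k a) in the momentum
# representation and transforming back gives psi(x - a), the negative
# shift discussed above (hbar = 1).
N, L, a = 1024, 40.0, 3.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

psi = np.exp(-x**2 / 2)                         # Gaussian test state
shifted = np.fft.ifft(np.fft.fft(psi) * np.exp(-1j * k * a))

err = np.max(np.abs(shifted - np.exp(-(x - a)**2 / 2)))
# err is tiny: the displaced state is psi(x - a), not psi(x + a)
```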
As a last note and question: the first order Taylor approximation of the momentum operator was used. If the higher order terms are retained, as in
then how does one evaluate a squared delta function (or Nth power)?
Talked to Vatche about this after class. The key to this is sequential evaluation. Considering the simple case for , we evaluate one operator at a time, and never actually square the delta function
I was also questioned why I was including the delta function at this point. Why would I do that? Thinking further on this, I see that it isn’t a reasonable thing to do. That delta function only comes into the mix when one takes the matrix element of the momentum operator, as in
This is very much like the fact that the delta function only shows up in the continuous representation in other contexts where one has matrix elements. The simplest example is just
I also see now that the momentum operator is directly identified with the derivative (no delta function) in two other places in the text. These are equations (2.32) and (2.46) respectively:
In the first, (2.32), I thought the was somehow different, just a helpful expression found along the way, but now it occurs to me that this was intended to be an unambiguous representation of the momentum operator itself.
A second try.
Getting a feel for this Dirac notation takes a bit of adjustment. Let’s try evaluating the matrix element for the space displacement operator again, without abusing the notation, or thinking that we have a requirement for squared delta functions and other weirdness. We start with
Now, to evaluate , we can expand in series
It is tempting to left multiply by and commute that past the , then write . That probably produces the correct result, but is abusive of the notation. We can still left multiply by , but to be proper, I think we have to leave that on the left of the operator. This yields
The first integral is just , and we are left with integrating the higher power momentum matrix elements, applied to the wave function . We can proceed iteratively to expand those integrals
Now we have a matrix element that we know what to do with. Namely, , which yields
Each similar application of the identity operator brings down another and derivative yielding
Going back to our displacement operator matrix element, we now have
This shows nicely why the sign goes negative and it is no longer surprising when one observes that this can be obtained directly by using the adjoint relationship
That’s a whole lot easier than the integral manipulation, but at least shows that we now have a feel for the notation, and have confirmed the exponential formulation of the operator nicely.
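The operator series used above is just exp(-a d/dx) f(x) = f(x - a). For a polynomial the Taylor series terminates, so the identity can be checked exactly (my own verification, not from the text):

```python
import sympy as sp

# Exact check that the operator Taylor series exp(-a d/dx) reproduces
# the displaced function, using a polynomial so the series terminates.
x, a = sp.symbols('x a')
f = 1 + 2 * x + 5 * x**3 - x**4                 # arbitrary quartic

series = sum((-a)**n / sp.factorial(n) * sp.diff(f, x, n) for n in range(5))
ok = sp.expand(series - f.subs(x, x - a)) == 0
```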
Time evolution operator
Regarding the phrase “we identify time evolution with the Hamiltonian”: what a magic hat maneuver! Is there a way that this would be logical without already knowing the answer?
Dispersion delta function representation.
The principal part notation here I found a bit unclear. He writes
In complex variables the principal part is the negative power series terms. For example for , the principal part is
This doesn’t vanish at as the principal part in this section is stated to. In (2.202) he pulls the out of the integral, but I think the intention is really to keep this associated with the , as in
Will this even have any relevance in this text?
1. Cauchy-Schwarz identity.
We wish to find the value of that is just right to come up with the desired identity. The starting point is the expansion of the inner product
There is a trial and error approach to this problem, where one magically picks , and figures out the proportionality constant and scale factor for the denominator to do the job. A nicer way is to set up the problem as an extreme value exercise. We can write this inner product as a function of , and proceed with setting the derivative equal to zero
Its derivative is
Now, we have a bit of a problem with , since that doesn’t actually exist. However, that problem can be sidestepped if we insist that the factor that multiplies it is zero. That provides a value for that also kills off the remainder of . That value is
Back substitution yields
This is easily rearranged to obtain the desired result:
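The extreme-value argument is easy to spot check numerically. A minimal sketch (mine), with the extremal value lam = &lt;u|v&gt;/&lt;u|u&gt; found above:

```python
import numpy as np

# Numeric sketch of the extreme-value argument: the minimizing value is
# lam = <u|v>/<u|u>, and back substitution gives the Schwarz inequality
# |<u|v>|^2 <= <u|u><v|v>.
rng = np.random.default_rng(0)
u = rng.normal(size=8) + 1j * rng.normal(size=8)
v = rng.normal(size=8) + 1j * rng.normal(size=8)

ip = lambda a, b: np.vdot(a, b)                 # <a|b>, conjugate-linear in a
lam = ip(u, v) / ip(u, u)                       # the extremal value

residual = ip(v - lam * u, v - lam * u).real    # >= 0 by positivity
slack = (ip(u, u) * ip(v, v)).real - abs(ip(u, v))**2
# residual * <u|u> equals slack, so positivity of the residual is exactly
# the Schwarz inequality.
```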
2. Uncertainty relation.
Using the Schwarz inequality of problem 1, and a symmetric and antisymmetric (anticommutator and commutator) sum of products, show that
and that this result implies
This problem seems somewhat misleading, since the Schwarz inequality appears to have nothing to do with showing 3.1, but only with the split of the operator product into symmetric and antisymmetric parts. Another possible tricky thing about this problem is that there is no mention of the anticommutator in the text at this point that I can find, so if one does not know what it is defined as, it must be figured out by context.
I’ve also had an interpretation problem with this, since in 3.2 cannot mean the operators as is the case in 3.1. My assumption is that in 3.2 these deltas are really absolute expectation values, and that we really want to show
However, I’m unable to demonstrate this. Instead I’m able to show two things:
Is one of these the result to be shown? Note that only the first of these required the Schwarz inequality. Also, it seems strange that we want the expectation of the operator ?
Starting with the first part of the problem, note that we can factor any operator product into a linear combination of two Hermitian operators using the commutator and anticommutator. That is
For Hermitian operators , and , using , we can show that the two operator factors are Hermitian,
So for the absolute squared value of the expectation of the product of two operators we have
Now, these expectation values are real, given the fact that these operators are Hermitian. Suppose we write , and , then we have
So we have for the squared expectation value of the operator product
With , and , this almost completes the first part of the problem. The remaining thing to note is that . This last is straightforward to show
Putting the pieces together we have
With expectation value implied by the absolute squared, this reproduces relation 3.1 as desired.
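The pieces can be checked numerically with random Hermitian matrices. Below is a sketch of my own, where I'm assuming relation 3.1 is the decomposition of |&lt;AB&gt;|^2 into anticommutator and commutator parts:

```python
import numpy as np

# Numeric check, with random Hermitian A, B and a random normalized state,
# of what I take relation 3.1 to be:
#   |<AB>|^2 = (1/4)|<{A,B}>|^2 + (1/4)|<[A,B]>|^2,
# together with the Schwarz bound <A^2><B^2> >= |<AB>|^2.
rng = np.random.default_rng(1)

def herm(n):
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (M + M.conj().T) / 2

n = 6
A, B = herm(n), herm(n)
psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)

ev = lambda M: np.vdot(psi, M @ psi)            # expectation value
lhs = abs(ev(A @ B))**2
rhs = abs(ev(A @ B + B @ A))**2 / 4 + abs(ev(A @ B - B @ A))**2 / 4
slack = (ev(A @ A) * ev(B @ B)).real - lhs      # >= 0 (Schwarz)
```

The decomposition works because the anticommutator expectation is purely real and the commutator expectation purely imaginary for Hermitian A and B, exactly as used above.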
For the remaining part of the problem, with , and , and noting that for a Hermitian operator (or too in this case), the Schwarz inequality
takes the following form
These are expectation values, and allow us to use 3.4 to show
For , and , this is
Hmm. This doesn’t look like it is quite the result that I expected? We have instead of ?
Let’s step back slightly. Without introducing the Schwarz inequality, the result 3.4 of the commutator manipulation gives us
and taking roots we have
Is this really what we were intended to show?
Attempting to answer this myself, I refer to , where I find he uses a loose notation for this too, and writes in his equation 3.36
This usage seems consistent with that, so I think that it is a reasonable assumption that the uncertainty relation is really shorthand notation for the more cumbersome relation involving roots of the expectations of mean-square deviation operators
This is in fact what was proved arriving at 3.6.
Ah ha! Found it. Referring to equation 2.93 in the text, I see that a lower case notation , was introduced. This explains what seemed like ambiguous notation … it was just tricky notation, perfectly well explained, but done in passing in the text in a somewhat hidden seeming way.
This problem is done by inspection.
5. Hermitian radial differential operator.
Show that the operator
is not Hermitian, and find the constant so that
For the first part of the problem we can show that
For the RHS we have
and for the LHS we have
So, unless , the operator is not Hermitian.
Moving on to finding the constant such that is Hermitian we calculate
So, for to be Hermitian, we require
So , and our Hermitian operator is
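The Hermitian property can be confirmed numerically against the radial measure. A sketch of mine (hbar = 1), assuming the operator in question is -i(d/dr + 1/r) with inner products weighted by r^2 dr:

```python
import numpy as np

# Numeric check (hbar = 1): D = -i(d/dr + 1/r) is symmetric with respect
# to the radial measure r^2 dr, while the bare -i d/dr is not.  Test
# functions vanish at both endpoints so boundary terms drop.
r = np.linspace(1e-3, 20.0, 4000)
dr = r[1] - r[0]
f = r * np.exp(-r)
g = r**2 * np.exp(-r)

deriv = lambda u: np.gradient(u, r)             # numerical d/dr
inner = lambda u, v: np.sum(np.conj(u) * v * r**2) * dr

D = lambda u: -1j * (deriv(u) + u / r)          # candidate Hermitian operator
D0 = lambda u: -1j * deriv(u)                   # bare radial derivative

sym_defect = abs(inner(f, D(g)) - inner(D(f), g))    # ~ 0
bad_defect = abs(inner(f, D0(g)) - inner(D0(f), g))  # clearly nonzero
```

The 1/r term is exactly what soaks up the derivative of the r^2 weight in the integration by parts.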
6. Radial directional derivative operator.
is Hermitian. Expand this operator in spherical coordinates. Compare result to problem 5.
Tackling the spherical coordinates expression of the operator , we have
Here braces have been used to denote the extent of the operation of the gradient. In spherical polar coordinates, our gradient is
This gets us most of the way there, and we have
Since , we are left with evaluating , and . To do so I chose to employ the (Geometric Algebra) exponential form of the spherical unit vectors 
The partials of interest are then
Only after computing these did I find exactly these results for the partials of interest, in MathWorld’s Spherical Coordinates page, which confirms these calculations. Note that a different angle convention is used there, so one has to exchange , and and the corresponding unit vector labels.
Substitution back into our expression for the operator we have
an operator that is exactly twice the operator of problem 5, already shown to be Hermitian. Since scaling by a real constant leaves a Hermitian operator Hermitian, this shows that is Hermitian as expected.
directional momentum operator
Let’s try this for the other unit vector directions too. We also want
The work consists of evaluating
This time we need the , partials, which are
This has no component, so does not contribute to . Noting that
the partial is
Assembling the results, and labeling this operator we have
It would be reasonable to expect this operator to also be Hermitian, and checking this explicitly by comparing
and , shows that this is in fact the case.
directional momentum operator
Let’s try this for the other unit vector directions too. We also want
The work consists of evaluating
This time we need the , partials. The partial is
We conclude that , and expect that we have one more Hermitian operator
It is simple to confirm that this is Hermitian, since the integration by parts does not involve any of the volume element. In fact, any operator would also be Hermitian, including the simplest case . I’ll have to dig out my Bohm text again, since I seem to recall that one used in the spherical harmonics chapter.
A note on the Hermitian test and Dirac notation.
I’ve been a bit loose with my notation. I’ve stated that my demonstrations of the Hermitian nature have been done by showing
However, what I’ve actually done is show that
To justify this note that
Working backwards one sees that the comparison of the wave function integrals in explicit inner product notation is sufficient to demonstrate the Hermitian property.
7. Some commutators.
For in problem 6, obtain
iii) , where .
iv) Show that
7. Expansion of .
While expressing the operator as has less complexity than the , since no operation on is required, this doesn’t look particularly convenient for use with Cartesian coordinates. Slightly better perhaps is
So this first commutator is:
7. Alternate expansion of .
Let’s try this instead completely in coordinate notation to verify. I’ll use implicit summation for repeated indexes, and write . A few intermediate results will be required
The action of the momentum operators on the coordinates is
We can use these to rewrite
This leaves us in the position to compute the commutator
So, unless I’m doing something fundamentally wrong in the same way in both methods, this appears to be the desired result. I question my answer, since utilizing this for the later computation of did not yield the expected answer.
If there is some significance to this expansion, other than to get a feel for operator manipulation, it escapes me.
To expand , it will be sufficient to consider any specific index and then utilize cyclic permutation of the indexes in the result to generalize. Let’s pick , for which we have
It appears we will want to know
and we also want
This also happens to be , but does that help at all?
Assembling these we have
With a bit of brute force it is simple enough to verify that all these terms mystically cancel out, leaving us with zero
There surely must be an easier way to demonstrate this. Likely utilizing the commutator relationships derived earlier.
We will need to evaluate . We have the first power from our commutator relation
A successive application of this operator therefore yields
So we have
This now preps us to expand the first product in the desired exponential sandwich
The exponential sandwich then produces
Note that this isn’t the value we are supposed to get. Either my value for is off by a factor of or the problem in the text contains a typo.
8. Reduction of some commutators using the fundamental commutator relation.
Using the fundamental commutation relation
which we can also write as
expand , , and .
The first is
The second is
Note that it is helpful for the last reduction of this problem to observe that we can write this as
Finally for this last we have
That’s about as reduced as this can be made, but it is not very tidy looking. From this point we can simplify it a bit by factoring
9. Finite displacement operator.
9. Part I.
The first part of this problem is to show that
We need to evaluate
To do so requires a reduction of . For we have
For the cube we get , supplying confirmation of an induction hypothesis , which can be verified
For our exponential we then have
Put back into our commutator we have
completing the proof.
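Since p = -i hbar d/dx, the operator exp(i a p / hbar) = exp(a d/dx) acts on wave functions as the shift x -&gt; x + a, which gives an independent check of the commutator identity (as I read the elided equations, [x, exp(i a p / hbar)] = -a exp(i a p / hbar)):

```python
import sympy as sp

# Check of the commutator with the displacement operator, representing
# exp(i*a*p/hbar) = exp(a*d/dx) by its shift action x -> x + a.
# Assumed identity: [x, exp(i*a*p/hbar)] = -a exp(i*a*p/hbar).
x, a = sp.symbols('x a')
f = sp.Function('f')

shift = lambda g: g.subs(x, x + a)              # action of exp(i*a*p/hbar)
lhs = x * shift(f(x)) - shift(x * f(x))         # [x, exp(i*a*p/hbar)] f
rhs = -a * shift(f(x))
ok = sp.simplify(lhs - rhs) == 0
```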
9. Part II.
For state with , show that the expectation values satisfy
so our position expectation is
A change of variables gives us
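The change of variables is easy to see numerically as well. A small sketch of mine, assuming the displaced state is psi(x - a):

```python
import numpy as np

# Numeric illustration of the change of variables: for the displaced state
# psi(x - a), the position expectation is <x> + a.
N, L, a = 2048, 60.0, 2.5
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

def normalize(p):
    return p / np.sqrt(np.sum(np.abs(p)**2) * dx)

psi = normalize(np.exp(-(x - 1.0)**2 / 2))          # state centered at x = 1
psi_disp = normalize(np.exp(-(x - 1.0 - a)**2 / 2)) # psi(x - a)

x_mean = np.sum(x * np.abs(psi)**2) * dx
x_mean_disp = np.sum(x * np.abs(psi_disp)**2) * dx
# x_mean_disp - x_mean comes out equal to a
```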
10. Hamiltonian position commutator and double commutator
Calculate , and .
We also have to show that
Expanding the absolute value in terms of conjugates we have
It is not obvious where to go from here. Taking the clue from the problem that the result involves the double commutator, we have
So, somewhat flukily by working backwards, with a last rearrangement, we now have
Substitution above gives the desired result. This is extremely ugly, and doesn’t follow any sort of logical progression. Is there a good way to sequence this proof?
11. Another double commutator.
Attempt 1. Incomplete.
Use to obtain
First evaluate the commutators. The first is
The Laplacian applied to this exponential is
Factoring out the exponentials this is
and in terms of , we have
So, finally, our first commutator is
The double commutator is then
To simplify this we want
The double commutator is then left with just
Now, returning to the energy expression
I can’t figure out what to do with the expectation, and keep going around in circles.
I figure there is some trick related to the double commutator, so expanding the expectation of that seems appropriate
I was going in circles above. With the help of betel on physicsforums, I got pointed in the right direction. Here’s a rework of this problem from scratch, also benefiting from hindsight.
Our starting point is the same, with the evaluation of the first commutator
To continue we need to know how the momentum operator acts on an exponential of this form
This gives us the helpful relationship
Squared application of the momentum operator on the positive exponential found in the first commutator 3.18, gives us
with which we can evaluate this first commutator.
For the double commutator we have
so for the double commutator we have just a scalar
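That scalar can be verified symbolically. Here is my reconstruction (the sign may differ with other conventions): for H = p^2/(2m) + V(x), the double commutator [[H, exp(i k x)], exp(-i k x)] reduces to -hbar^2 k^2 / m, independent of the potential and of the test function:

```python
import sympy as sp

# Symbolic verification of the double commutator scalar (assumed form:
# [[H, e^{ikx}], e^{-ikx}] = -hbar^2 k^2 / m), using differential operators
# acting on an arbitrary f(x), with H = p^2/(2m) + V(x).
x, k = sp.symbols('x k', real=True)
hbar, m = sp.symbols('hbar m', positive=True)
V = sp.Function('V')
f = sp.Function('f')

def H(g):
    return -hbar**2 / (2 * m) * sp.diff(g, x, 2) + V(x) * g

e = sp.exp(sp.I * k * x)

def C(g):                                       # [H, e^{ikx}] applied to g
    return H(e * g) - e * H(g)

double = C(e**-1 * f(x)) - e**-1 * C(f(x))      # [[H, e^{ikx}], e^{-ikx}] f
scalar = sp.simplify(sp.expand(double) / f(x))
```

All the V(x) and derivative terms cancel, leaving only the scalar, which is what makes the expectation manipulation below work.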
Now consider the expectation of this double commutator, expanded with some unintuitive steps that have been motivated by working backwards
By 3.22, we have completed the problem
There is one subtlety above that is worth explicit mention before moving on. In particular, I did not find it intuitive that the following was true
However, observe that both of these exponential sandwich operators , and are Hermitian, since we have for example
Also observe that these operators are both complex conjugates of each other, and with for short, can be written
Because is real valued, and the expectation value of a Hermitian operator is real valued, none of the imaginary terms can contribute to the expectation values, and in the summation of 3.24 we can thus pick and double either of the exponential sandwich terms, as desired.
B.R. Desai. Quantum Mechanics with Basic Field Theory. Cambridge University Press, 2009.
R. Liboff. Introductory Quantum Mechanics. 2003.
D. Bohm. Quantum Theory. Courier Dover Publications, 1989.
Peeter Joot. Spherical Polar unit vectors in exponential form. [online]. http://sites.google.com/site/peeterjoot/math2009/sphericalPolarUnit.pdf.