
# Archive for February, 2009

## distributions.

Posted by peeterjoot on February 27, 2009

Lectures 12, 13, and perhaps 11 of Prof Brad Osgood’s Fourier transform lectures cover Schwartz’s distribution theory at a high level, and this sounds particularly useful.

In trying to Fourier transform Maxwell’s equation and reduce the Green’s function to arrive at the retarded potential solutions, one has to do a whole lot of dubious stuff, pulling delta and unit step functions out of the air from integral representations.

I found the lecture notes on this material, and have now digested some of this distribution content.

As a test application I used these ideas to solve the 1D wave equation. After listening to the distribution lectures, I wasn’t convinced that this method would be practical, but the proof is in the application. I was surprised that it is actually simpler, with no “so many words” (as Osgood puts it) requirement to pull delta functions out of magic hats from PV sinc evaluations of the exponential integral. He’s done an excellent job at making this material accessible.
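As a sanity check on the “delta functions from PV sinc evaluations” idea, here is a quick numerical sketch (my own, not from the lectures) showing that $\sin(Lx)/(\pi x)$ acts like a delta function against a smooth test function as $L$ grows:

```python
import numpy as np

# sin(L x)/(pi x), written via numpy's normalized sinc: np.sinc(t) = sin(pi t)/(pi t)
def sinc_kernel(x, L):
    return (L / np.pi) * np.sinc(L * x / np.pi)

f = lambda x: np.exp(-x**2)              # smooth test function with f(0) = 1
x = np.linspace(-20.0, 20.0, 400_001)    # fine symmetric grid
dx = x[1] - x[0]

# the integral of sinc_kernel * f should approach f(0) = 1 as L grows
vals = [np.sum(sinc_kernel(x, L) * f(x)) * dx for L in (1, 10, 100)]
print(vals)
```

For this Gaussian test function the exact value is $\text{erf}(L/2)$, so the sequence climbs from about 0.52 toward 1.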

I plan to write up my wave equation distribution treatment for myself shortly, and try some variations (Poisson, higher dimensional wave equations, non-homogeneous cases, heat equation, …). Will these ideas also extend naturally to higher grade objects such as the bivector electrodynamic field?

## Central Limit Theorem.

Posted by peeterjoot on February 6, 2009

While doing the dishes tonight I listened to Prof Brad Osgood’s Fourier Series Lecture 10, on the Central Limit Theorem.

Without the video, this one was a bit hard to follow, but here are some notes for later followup. If I don’t write them down now, trying to go through this on my own later may be tough.

First off he displays some repeated convolutions. A rect function convolved with itself gets smoother, and a bit bell curvish. Another convolution with a rect function gets more so, and after a few, it is “spookily bell curvish”.
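This repeated rect convolution is easy to reproduce numerically (my own sketch, not his): after only five rects the result is already within a couple percent of a Gaussian with matching mean and variance.

```python
import numpy as np

dx = 0.01
x = np.linspace(-0.5, 0.5, 101)          # unit rect supported on [-1/2, 1/2]
rect = np.ones_like(x)

g = rect.copy()
for _ in range(4):                        # four more convolutions: five rects total
    g = np.convolve(g, rect) * dx         # dx factor approximates the integral

g = g / (g.sum() * dx)                    # normalize to unit area
xs = (np.arange(g.size) - (g.size - 1) / 2) * dx

# compare against the Gaussian with the same mean and variance as g
mu = (xs * g).sum() * dx
var = ((xs - mu)**2 * g).sum() * dx
gauss = np.exp(-(xs - mu)**2 / (2 * var)) / np.sqrt(2 * np.pi * var)
err = np.max(np.abs(g - gauss))
print(err)                                # already "spookily bell curvish"
```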

He then shows the same thing (again I am imagining what he showed) with a randomly generated function, lines interconnecting random points in some interval, repeatedly convolved with itself, again producing a bell curve like shape. “Spooky!” ;)

Next is a brief mention of random variables, probability density functions, expectation values, variance, and so forth. With these ideas, given the pdfs for two independent, identically distributed random variables, the distribution of the sum of the two is shown to have the form of a convolution of the two pdfs. The process can be repeated, with the end result producing an n-fold convolution of pdfs.
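The two-variable case is easy to check by simulation (my own sketch): the empirical density of the sum of two uniforms matches the convolution of the two uniform pdfs, which is a triangle.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
s = rng.uniform(0, 1, n) + rng.uniform(0, 1, n)   # sum of two iid uniforms

# empirical density of the sum
hist, edges = np.histogram(s, bins=50, range=(0.0, 2.0), density=True)
mids = (edges[:-1] + edges[1:]) / 2

# convolution of two uniform[0,1] pdfs is the triangle 1 - |x - 1| on [0, 2]
tri = 1 - np.abs(mids - 1)
err = np.max(np.abs(hist - tri))
print(err)        # small: histogram agrees with the convolution prediction
```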

Now, hearing that treatment without seeing it, at his machine gun speech pace, I didn’t know exactly what he had ended up writing down. It does sound like it is straightforward enough to go through this, and I do seem to recall covering this eons ago back in school. I probably have this in a textbook somewhere if I can’t figure it out on my own later.

With the background covered the finale is a Fourier transformation of the convolution, since the transform turns the convolution into a product in the frequency domain. I’ll have to try this myself for some simple cases with two, three, four pdfs convolved, and I can probably see where he went with this. The last bit ended up being a Taylor series expansion of an exponential. I completely missed, in my imagined watch-along, how it got to that point. In the end, after discarding some higher order terms, he gets a Gaussian, and therefore a Gaussian again with the inverse Fourier transform, in the limit of infinitely repeated convolutions of arbitrary independent, identically distributed pdfs (i.i.d., or something like that, was the abbreviation mentioned).
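The frequency domain version of the limit can also be checked numerically. The transform of the uniform pdf on $[-1/2, 1/2]$ is $\sin(\omega/2)/(\omega/2)$; raising it to the n-th power (the transform of the n-fold convolution, with the $\sqrt{n}$ scaling of the standardized sum) approaches a Gaussian. A sketch, again my own rather than his board work:

```python
import numpy as np

# characteristic function of the uniform on [-1/2, 1/2] is sin(w/2)/(w/2);
# with numpy's convention np.sinc(t) = sin(pi t)/(pi t) this is np.sinc(w/(2 pi))
sigma2 = 1 / 12                            # variance of that uniform
w = np.linspace(-6, 6, 601)

diffs = []
for n in (2, 10, 100):
    # transform of the n-fold convolution, scaled for the sum divided by sqrt(n)
    phi_n = np.sinc(w / (2 * np.pi * np.sqrt(n))) ** n
    gauss = np.exp(-sigma2 * w**2 / 2)
    diffs.append(np.max(np.abs(phi_n - gauss)))
print(diffs)   # the deviation from the Gaussian shrinks as n grows
```

This is exactly the Taylor expansion step: each factor is $1 - \omega^2/(24 n) + \cdots$, and the n-th power of that tends to $e^{-\omega^2/24}$.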

Now, why is this interesting enough to care about? I have an intuition or suspicion that these ideas are behind the Heisenberg uncertainty principle. Certainly the idea that Fourier transform analysis of probability functions is a useful tool is particularly interesting in the context of Quantum Mechanics.

I think that I’m going to have to stop listening to educational podcasts for a while. I am getting behind keeping up with the ideas coming out, and it will get overwhelming before too long!

## Hamiltonians, phase space, and Poisson brackets.

Posted by peeterjoot on February 5, 2009

Have listened to Susskind’s classical mechanics lecture 6 now. This covers Hamiltonians, phase space, chaotic systems, energy as the conserved quantity for a time shift of the Lagrangian, and Poisson brackets (time evolution of any function of position and momentum in terms of the Hamiltonian and its derivatives). All in all, an excellent and informative lecture.
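Writing out the Poisson bracket evolution law as I understood it (the standard form, not a transcription of his board work):

```latex
\frac{df}{dt} = \{f, H\} + \frac{\partial f}{\partial t},
\qquad
\{f, g\} = \sum_i \left(
    \frac{\partial f}{\partial q_i} \frac{\partial g}{\partial p_i}
  - \frac{\partial f}{\partial p_i} \frac{\partial g}{\partial q_i}
\right).
```

Hamilton’s equations themselves are the special cases $f = q_i$ and $f = p_i$, which give $\dot{q}_i = \partial H/\partial p_i$ and $\dot{p}_i = -\partial H/\partial q_i$.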

I’d seen bits and pieces of these in various QM contexts; it is good to see how these ideas come up in classical physics.

Have a lot of work to do to get a good working knowledge to match what I’ve now listened to.

Some things to try: work a number of problems in both the Hamiltonian and Poisson formulations. Probably all the ones I’ve done for Lagrangians:

1. Free particle.
2. Pendulum.
3. Double Pendulum.
4. Harmonic oscillator.
5. Connected masses on strings.
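As a warm-up for the harmonic oscillator entry on that list, a small numerical sketch (my own, not from the lecture) integrating Hamilton’s equations with a symplectic Euler step, which respects the phase space structure well enough to hold the energy steady:

```python
import numpy as np

# H = p^2/(2m) + k q^2/2, so Hamilton's equations give qdot = p/m, pdot = -k q
m, k, dt = 1.0, 1.0, 0.001
q, p = 1.0, 0.0                              # start at rest at q = 1

for _ in range(int(round(2 * np.pi / dt))):  # roughly one period (T = 2 pi for m = k = 1)
    p -= k * q * dt                          # symplectic Euler: update p first...
    q += (p / m) * dt                        # ...then q using the new p

E = p**2 / (2 * m) + k * q**2 / 2
print(q, p, E)   # q back near 1, p near 0, energy near the initial k/2 = 0.5
```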

This also sounds like a good opportunity to go back and revisit the Cambridge part III lecture notes that covered their GA re-formulation of the Hamiltonian and Poisson bracket formalism. I think they used a real bivector space to model the n-particle phase space. Lut said it was pretty slick, and I was able to follow along with the math parts, but had absolutely no idea what the point of the whole thing was. We had some followup conversations on this topic where he was using matrices to model the Hamiltonian time evolution in phase space as rotations, I think. I have a bit of a clue what that was about now, and should go back and explore it.

Phase space I’d seen eons ago back in linear differential equations (Lorenz attractors, …). It will be good to revisit some of that with classical mechanics as a motivator.

Now, the energy relationship to time invariance of the Lagrangian is an interesting one to pursue, but it is a little bit sneaky. Before high school physics, I found the concept of energy a very fuzzy idea. School teachers talk about conservation of energy: “heat is a form of energy”, “light is a form of energy”, “mass is converted to energy in nuclear reactions ($E = mc^2$)”. My daughter Aurora, who is in grade 4 now, bandies some of these sorts of phrases around. It bugs me because I can see she doesn’t have any idea what this noun energy is. It bugs me even more because I can’t give her a definition that I find acceptable. Once I took high school physics the fuzz lifted for a while. Energy became nothing more than a measured quantity: the capability of a system to do work, apply force across a distance, produce motion, and so forth.

However, now that I have learned a bit of relativity the fuzz is back. Energy loses its place as an entity of its own, and one has an abstract energy momentum entity to replace it. I haven’t quite figured out where it fits anymore in the big picture. How are the proper time rate of change of the four vector momentum and the classical idea of energy related? We have energy and momentum associated with fields (wave energy, electromagnetic energy, stress energy tensors, …). There’s a heck of a lot of abstraction getting into the picture, and I don’t have a handle on it all yet.

In electrostatics we have an energy associated with the field, $\epsilon_0 \mathbf{E}^2/2$, and that one is easy enough to work out. Calculate the energy required to bring two like signed charges together from infinity to some separation. We can do the same thing with a set of charges, summing the energy over all pairs. Switching to a superposition argument, double summing over all charges and moving to a continuous description (double summing over all space), one has this electrostatic energy. This is basically nothing more than a fancy Work = Force times Distance argument, and I’ve done it a few times. The last time was in cgs units, to get a handle on the older notation that Bohm uses in his Quantum Mechanics text.
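The discrete pairwise sum that starts this argument is simple enough to write down explicitly. A toy version in SI units (charge values and positions below are made up purely for illustration):

```python
import numpy as np
from itertools import combinations

# work to assemble point charges from infinity:
#   W = sum over pairs of q_i q_j / (4 pi eps0 r_ij)
eps0 = 8.8541878128e-12           # F/m
charges = [                        # (charge in C, position in m) -- made-up values
    ( 1e-9, np.array([0.0, 0.0, 0.0])),
    ( 1e-9, np.array([1.0, 0.0, 0.0])),
    (-1e-9, np.array([0.0, 1.0, 0.0])),
]

W = sum(
    qi * qj / (4 * np.pi * eps0 * np.linalg.norm(ri - rj))
    for (qi, ri), (qj, rj) in combinations(charges, 2)
)
print(W)   # net assembly work in joules (negative here: the configuration is bound)
```

The continuum $\epsilon_0 \mathbf{E}^2/2$ expression is what this double sum becomes after the superposition and continuous-charge-density steps.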

It’s not clear to me how to get the magnetostatics equivalent $\epsilon_0 c^2 \mathbf{B}^2/2$. With magnetic forces being perpendicular to the direction of motion, one needs to somehow formulate the problem as the work done against the magnetic field. Last time I tried to get there, I only ended up with the Biot-Savart law, which wasn’t what I wanted, and I’ll have to try again. I’m sure I’ve got a couple texts with explanations of this, but it will never really make sense until I’ve done it myself (probably a few times in different ways).

What’s also not clear to me is how one moves from the statics to the dynamics cases, and figures out a complete energy of the field when the charges are allowed to move (or move at non-constant rates or varying paths in the anti-magnetostatics case). This argument I believe can only be done in a relativistic context. If one assumes that the field energy is just the sum of the electric and magnetic “statics” energies, then the calculation showing that there is an associated conservation law involving the Poynting vector is not too difficult, but that’s not the same as getting a good handle on things from a Force times Distance point of view.

Hmm, as for demonstrating that $\epsilon_0 (\mathbf{E}^2 + c^2 \mathbf{B}^2)/2$ is the field energy, a promising thought is that just boosting the field from a static electric charge distribution would probably do the job. Will have to try that.

Before listening to the Susskind lecture, the ideas above were what I felt I still had to work through for myself. Now here comes a completely different point of view on the subject: define energy in terms of Lagrangian time shift invariance. That’s something that I can work out easily enough in the classical case and will have to do so (in particular for the infinite set of connected springs in a line that generates the wave equation). This should also be easy enough an exercise to do for the electrostatics Lagrangian. I don’t think that I’ve ever written out the Lagrangian for the magnetostatics case in isolation … I wonder what that looks like? I assume that this will produce the desired $\mathbf{B}^2$ term.

For the relativistic form of the Lorentz force equation, I’d guess that proper time shift invariance should be the interesting quantity to look at. Will that produce an energy momentum relationship? There are many things to try to explore this idea, and perhaps doing so will help lift some of the energy fog. The more I learn, the more work I have to do to follow up on the ideas to their logical conclusions.
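The classical statement I want to verify is the standard one: if $L(q_i, \dot{q}_i)$ has no explicit time dependence, then the quantity

```latex
E = \sum_i \dot{q}_i \frac{\partial L}{\partial \dot{q}_i} - L
```

is conserved along solutions of the equations of motion, and for $L = T - V$ with $T$ quadratic in the velocities this reduces to the familiar $E = T + V$. Working that out for the chain-of-springs Lagrangian should be a good first exercise.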