Traditionally one speaks of the historical development of the laws of electromagnetism. The Greeks noticed that there were magnetic rocks, and that you could produce static electricity by rubbing particular substances like amber. Franklin, Coulomb, Ampere, Gauss, Faraday, and Maxwell investigated these effects and developed a complete mathematical theory of the electromagnetic field. Most people follow this historical path, and show how the laws were deduced using kites, magnets and wires.
We'll take a completely different approach. Instead of starting from macroscopic electromagnetic effects like lightning and compasses, we'll take a microscopic approach and see if we can understand why God would make electricity, and what it really does. I presume in this chapter that you've taken a standard undergraduate course in Quantum Mechanics, and are at least passingly familiar with the basic ideas. Quite interestingly, it will turn out that if you know basic quantum mechanics and make one extra, very simple assumption, you can derive the need for an electromagnetic field. We'll derive this, and thereby gain some new insight into the source and effect of electromagnetism. We'll also see that this assumption leads to a quite powerful conservation law. It will seem like a long and strange journey, but we'll wind up in a very interesting place. Also, this development of electromagnetism is fully compatible with quantum field theory, and in fact anticipates many of its important ideas. Because of this, I think it's a more appropriate introduction to the theory. Isaac Newton and James Maxwell would find this path very strange, but then they would also find professors using laptop computers, video projectors and laser pointers to give PowerPoint presentations mind-boggling and possibly witchcraft.
In 1900 Max Planck published his paper on black body radiation. It was known that hot bodies radiate with a particular spectrum, called the blackbody spectrum. For example, a normal incandescent electric light bulb radiates light because its filament is hot enough that its blackbody radiation extends into the visible range. Planck thought it should be possible to calculate this spectrum using the laws of thermodynamics. Today we are often taught that this was a big outstanding problem in physics, but at the time no one but Planck was really very interested in it. The basics of the calculation are not hard to understand. Presume you have a large block of metal with a cubic hole in it. Since the block is metal, a conductor, there must be no electric field in the metal. So, any electric field in the chamber must have the value zero at the edges of the chamber. We'll also presume that this is a magic metal which absorbs any light of any frequency which hits it, hence "black body." Since light carries energy, after a time the metal reaches an equilibrium temperature, and it's then radiating light as fast as it's absorbing light. The light being radiated has a particular spectrum, which depends only on the temperature of the metal: the black body spectrum.
Now, since the electric field has the value zero at each side of the chamber, only specific frequencies of light are possible in this chamber. The lowest frequency has a wavelength which is exactly twice the width of the chamber. If the wavelength were any longer, the electric field would not have the required value zero at the edges of the chamber. The next lowest frequency has a wavelength which is the same as the width of the chamber. The allowed wavelengths have the property that nλ = 2L, where L is the width of the chamber and n is any positive integer. The allowed frequencies are f = c/λ, where c is the speed of light. We may think of these allowed frequencies as independent little oscillators, each oscillator like a guitar string that can be vibrating or still. If the "string" is vibrating, there is a photon present - stronger vibrations mean more photons of that frequency. The energy in the chamber should be equally divided up among the oscillators. Thermodynamics assures us that when a system is in equilibrium, the energy is equally divided among every possible degree of freedom. Here's our big problem: there are an infinite number of these oscillators, and so there should be only a vanishingly small amount of energy in each individual oscillator.
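As a concrete illustration, here is a minimal sketch that lists the first few allowed wavelengths λ = 2L/n and their frequencies f = c/λ. The 1 cm cavity width is just an assumed, illustrative number.

```python
# Minimal sketch: allowed standing-wave modes in a cavity of width L.
# The wavelengths satisfy n * lambda = 2L, so lambda_n = 2L / n, and the
# frequencies are f_n = c / lambda_n.  (L = 1 cm is just an illustrative choice.)

c = 3.0e8       # speed of light, m/s
L = 0.01        # cavity width, m (assumed for illustration)

for n in range(1, 6):
    wavelength = 2 * L / n
    frequency = c / wavelength
    print(f"n={n}: lambda = {wavelength:.4f} m, f = {frequency:.3e} Hz")
```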
Planck knew from measurements that the energy was not, in fact, spread vanishingly thin over an infinite number of frequencies: the measured spectrum has a definite peak and falls off rapidly at high frequencies. So, his problem was to come up with some reason why the highest frequency oscillators are never excited. Planck assumed that each oscillator had a minimum excitation energy E = hω, where ω = 2πf is the angular frequency. Now, the very highest frequencies would never be excited because there simply wasn't enough energy to meet the minimum. The interpretation of this minimum energy was left open. Planck was awarded the Nobel prize in 1918 for this work.
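Here is a back-of-the-envelope sketch of why the minimum-energy assumption freezes out the high-frequency oscillators. It uses the standard thermal average for an oscillator whose energy comes in steps of ħω; the 1500 K temperature is just an assumed number.

```python
# Sketch of Planck's resolution: if an oscillator can only hold energy in
# steps of hbar*omega, its thermal average energy is
#     <E> = hbar*omega / (exp(hbar*omega / kT) - 1),
# which approaches the classical value kT at low frequency but is
# exponentially suppressed at high frequency.  Numbers are illustrative.
import math

k_B = 1.381e-23      # Boltzmann constant, J/K
h_bar = 1.055e-34    # reduced Planck constant, J*s
T = 1500.0           # temperature in kelvin (assumed for illustration)

for f in (1e12, 1e13, 1e14, 1e15):          # frequencies in Hz
    omega = 2 * math.pi * f
    x = h_bar * omega / (k_B * T)
    planck_avg = h_bar * omega / math.expm1(x)
    print(f"f = {f:.0e} Hz: classical kT = {k_B*T:.2e} J, "
          f"Planck <E> = {planck_avg:.2e} J")
```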
With this assumption, Planck invented the field of quantum mechanics. His constant, h, remains central to the entire field. His assumption that energy sometimes comes in discrete packets which are multiples of hω is now understood to be a fundamental law of nature. Curiously, Planck himself was not comfortable with these ideas. In fact, at the time Planck published his paper on black body radiation, he did not believe in atoms - Planck thought that matter was continuous. Planck was never a proponent of the quantum theory.
[Portraits: Max Karl Ernst Ludwig Planck, 1858 to 1947; Albert Einstein, 1879 to 1955; Louis Victor Pierre Raymond, duc de Broglie, 1892 to 1987; Erwin Rudolf Josef Alexander Schrödinger, 1887 to 1961; Hermann Klaus Hugo Weyl, 1885 to 1955.]
Another interesting problem in physics was something called the photoelectric effect. It was known that if you shine a bright light upon certain metals, electrons are ejected from the metal. The energy in the light somehow bashes into the metal atoms and causes electrons to come flying out, like pool balls after the break. What was troublesome was that the electrons came out with a particular energy. If you made the light brighter you got more electrons, but the energy was the same. If you made the light a different color, then the energy of the electrons varied. This seemed very difficult to understand.
Einstein published a paper in 1905 explaining this phenomenon. He started with Planck's assumption, that light comes in little discrete energy packets with each packet having energy E = hω. Next, he assumed the electrons were bound to the atoms with some particular energy. Now it's easy to understand the effect. No electrons are ejected until the frequency of the light gets high enough that the individual photons have more energy than the binding energy. The energy of the ejected electrons will be the difference between the photon energy and the binding energy. When the light gets brighter, there are more photons but the energy held by each photon stays the same. When the light's color changes, the photon energy changes. This paper brought Planck's hypothesis into the laboratory where it could be measured and verified, and produced a solid foundation for quantum mechanics. This was the paper the Nobel committee cited when awarding Einstein his Nobel prize in 1921.
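A minimal sketch of Einstein's relation: an electron comes out carrying the photon energy minus the binding energy, and nothing comes out at all below threshold. The 2.3 eV binding energy is an assumed, illustrative value, not a tabulated one.

```python
# Sketch of Einstein's photoelectric relation: an ejected electron carries
# the photon energy minus the binding (work function) energy,
#     E_kinetic = h*f - W,
# and no electrons come out if h*f < W.  The binding energy below is
# illustrative, not a precise tabulated number.
h = 6.626e-34        # Planck constant, J*s
eV = 1.602e-19       # joules per electron-volt
W = 2.3 * eV         # assumed binding (work function) energy, ~2.3 eV

for wavelength_nm in (700, 550, 400, 250):          # red ... ultraviolet
    f = 3.0e8 / (wavelength_nm * 1e-9)              # frequency of the light
    E_photon = h * f
    if E_photon > W:
        print(f"{wavelength_nm} nm: electrons ejected with "
              f"{(E_photon - W) / eV:.2f} eV")
    else:
        print(f"{wavelength_nm} nm: no electrons, photon energy "
              f"{E_photon / eV:.2f} eV is below the binding energy")
```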
300 years earlier people wondered whether light was particles or waves. Isaac Newton answered this question. He said that sharp objects cast sharp shadows, and that this proved that light is made of particles. For if light were waves, he reasoned, the waves would "flow" around the sharp corners. However, there were equally compelling arguments that light consisted of waves. In 1802, Thomas Young decided to test the two competing hypotheses. He made an apparatus which was a solid wall with two slits. If light were particles, then the particles would go through one slit or the other, and there would be two distinct bright spots behind the wall, one spot behind each slit. However, if light were waves, then the light would go through both slits at the same time and would leave a pattern of dark and light stripes on the wall due to the interference from the two slits.
[Figure: the results - Young's two-slit interference pattern of bright and dark stripes.]
Young performed his experiment and observed an interference pattern. Newton's claim that light was particles was overturned, and for the next 100 years light was considered waves. Until Planck and Einstein screwed that up. Einstein seemed to be saying that light was a particle, since it came in discrete little packets. If not for the verification of the photoelectric effect in the laboratory, it's unlikely that Einstein's theory would have been very popular at all.
Why do the photons make the cute little striped bands? This is due to a very important effect called interference. The photons are a wave phenomenon (they're also particles, but we're not worried about that at this instant). In a very simplified fashion, we can think of the photon wave function as cosine( 2π x/λ ). λ is the photon's wavelength. The cosine function varies from -1 to 1. If you compare cosine(0) to cosine(π), the first has the value of 1 and the second has the value of -1. Photon waves cancel out if one wave has the value 1 and the other wave has the value -1. The peak of one wave just fills in the valley of the other, and they average to zero and cancel out. On the other hand, if both waves have the value 1, the two peaks add.
When the photon waves come through the two slits, if they travel the exact same distance then the cosine functions add, and the photon is said to "constructively interfere." If the waves take different angles, however, then they will have different distances to travel. If one path is precisely one half wavelength longer than the other, then the cosine functions have opposite values and cancel each other out. The photon is said to "destructively interfere." If the angles get even steeper, then one path will get even longer, up to a full wavelength longer. Now the photon constructively interferes again. At a slightly steeper angle, again we get destructive interference when one path is one and a half wavelengths longer than the other. And so we get the cute little stripes shown above - bright where the cosine waves add and the photons constructively interfere, and dark where the cosine waves subtract and cancel and the photons destructively interfere.
In the diagram above we see four particular spots where the photon might wind up. The top spot will be bright, as one path is 22 wavelengths long and the other is 23. These paths end in the same phase and constructively interfere - this will be a bright spot. Next down, in red, one path is 22 wavelengths long, the other is 22½ wavelengths long. These paths end in opposite phases and destructively interfere - this will be a dark spot. Next down is another bright spot where one path is 21 wavelengths and the other is 20 wavelengths, giving constructive interference. The fourth spot is a dark spot where one path is 23 wavelengths and the other is 22½ wavelengths long, giving destructive interference.
An important point here is that photons are indeed also particles - they are diffuse particles that act like waves. An individual photon goes through both slits at the same time, and then recombines at the screen. Each photon interferes only with itself - photons do not interfere with each other. You can test this yourself - get a couple of flashlights, or better yet a couple of laser pointers. Aim one at a wall. Now cross that beam with the other flashlight or laser pointer, and see if the spot from the first light changes. It won't. Actually, to be perfectly correct, relativistic quantum field theory does allow photons to scatter off one another, but the effect is fantastically rare - far too rare to see with your naked eye, though it can be measured in the laboratory.
Photon wave functions are actually complex functions of time and space: cos(2π(x/λ - ft)) + i sin(2π(x/λ - ft)), where f is the photon's frequency. We usually write this as e^(i2π(x/λ - ft)). This makes it much harder to draw the picture, but the final result is the same thing - when the two waves are out of phase by 180°, they cancel. When the two waves are in phase, they add.
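Here is a small sketch of this bookkeeping, using the complex form to recompute the four spots discussed above. The path lengths, in units of the wavelength, are the ones from the diagram.

```python
# Sketch of the two-slit interference pattern using the complex wave
# e^(i*2*pi*(path length in wavelengths)): add the waves from the two paths
# and look at the squared magnitude of the sum.
import cmath

def brightness(path1_wavelengths, path2_wavelengths):
    """Relative intensity when two paths recombine at the screen."""
    wave1 = cmath.exp(2j * cmath.pi * path1_wavelengths)
    wave2 = cmath.exp(2j * cmath.pi * path2_wavelengths)
    return abs(wave1 + wave2) ** 2

for p1, p2 in [(22, 23), (22, 22.5), (21, 20), (23, 22.5)]:
    print(f"paths {p1} and {p2} wavelengths: intensity {brightness(p1, p2):.2f}")
# Whole-number path differences give intensity 4 (bright spots);
# half-wavelength differences give 0 (dark spots).
```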
Some time after Einstein published his paper on the photo electric effect, he was asked in a conference if he thought light was waves or particles. He responded, "You will have your answer when you find a theory that says particles and waves are the same thing."
A few years later, when it became clear that this particle-wave duality meant that everything is a little bit fuzzy, and we can seemingly only predict probabilities, not certain outcomes, Einstein came to be deeply disgusted with the quantum theory. For example, in the diagram above we can calculate where photons will appear (bright spots) and where they won't (dark spots) but we cannot say which particular bright spot a particular photon will choose. Our current math tells us this is just random chance. Whenever quantum mechanics came up, Einstein would shake his head and sadly say "Surely you don't believe that God throws dice."
In 1924 de Broglie (pronounced "de Broyee") was a graduate student in physics, and was fascinated by the quantum hypothesis of Planck and Einstein. At the time, there was a very simple theory of the atom proposed by Niels Bohr which seemed to mostly account for the light spectrum emitted by excited atoms. However, there was no good understanding of why Bohr's mathematics should be correct. Furthermore, although Bohr's theory did pretty well for hydrogen, it had serious difficulties with more complicated atoms and molecules.
de Broglie proposed that if light, thought to be a wave, came in little particles, then electrons, which were thought to be little particles, should also be associated with waves. He called these guiding waves "pilot waves." de Broglie said that Einstein had told us that E = mc² and E = hω, so these pilot waves should have a frequency ω = E / h = mc² / h. de Broglie then showed that Bohr's quantum hypothesis for atoms was simply the hypothesis that the electrons traveled in orbits whose circumference was an integral number of these pilot wavelengths.
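A quick symbolic check of that last claim: requiring a whole number of de Broglie wavelengths, λ = h/p (here h is the ordinary Planck constant), to fit around a circular orbit gives Bohr's quantization of angular momentum.

```python
# Demanding that a whole number of pilot wavelengths fit around a circular
# orbit, n * lambda = 2*pi*r with lambda = h / p, is the same as Bohr's rule
# that the angular momentum p * r is an integer multiple of h / (2*pi).
import sympy as sp

n, r, p, h = sp.symbols('n r p h', positive=True)

lam = h / p                                     # de Broglie wavelength
fit_condition = sp.Eq(n * lam, 2 * sp.pi * r)   # whole wavelengths around the orbit
angular_momentum = sp.solve(fit_condition, p)[0] * r

print(sp.simplify(angular_momentum))            # -> h*n/(2*pi), i.e. n times h-bar
```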
de Broglie dutifully submitted his thesis for consideration for a PhD. His advisers thought his ideas were so strange as to be almost unbelievable. They felt that they wanted some outside opinions on de Broglie's work. The obvious person to ask was Einstein, so de Broglie's advisers sent a copy of his paper to Einstein and asked for his opinion. Einstein sent back a note that this paper was the most original thinking he had seen in at least a decade, and that he thought de Broglie was a true genius. With that kind of recommendation, de Broglie was awarded his PhD, and indeed a Nobel prize only 5 years later in 1929.
Now the question is, are electrons particles or waves? The simple way to answer this question is to do with electrons exactly what Young had done with light 120 years earlier: shoot a bunch of electrons through a very small two-slit apparatus, and photograph the results. It's been done; here are the pictures:
[Photographs: the two-slit pattern built up from 100 electrons, 3,000 electrons, and 70,000 electrons.]
The results are as bad as they could possibly be. Each individual electron leaves a clear, small dot on the screen. This is obviously particle behavior. However, when we let the photograph expose 70,000 electrons, the bands are there, big as a house. This is why some people say we can only predict the probability that an individual electron will hit a particular spot. The bands, they would say, show the probability interference. This is photographic evidence for Einstein's statement, that particles and waves are the same thing.
Almost immediately Erwin Schroedinger got hold of de Broglie's thesis containing his pilot wave hypothesis, and thought it simply fascinating. Schroedinger thought that if there were waves, there must be a wave equation, and he set out to find it. Schroedinger started by assuming that the pilot waves could sometimes be normal traveling waves, so that ψ = e^(i(kx - ωt)). Armed with de Broglie's equation ω = E / h and his own knowledge of special relativity, Schroedinger then said that the waves should have the form ψ = e^((i/h)(px - Et)). That is, Schroedinger said that the energy and momentum of the wave were related to the wave number and frequency with the equations E = hω and p = hk. Now, Schroedinger was ready to look for his wave equation.
Schroedinger knew from special relativity that E² = p²c² + m²c⁴. This is the form of the equation E = mc² when things are moving. Schroedinger noticed that for his traveling wave dψ/dt = -(i/h) E ψ, and dψ/dx = (i/h) p ψ, so he tried the equation d²ψ/dt² = c² d²ψ/dx² - (m²c⁴/h²) ψ. Unfortunately, as can easily be seen from Einstein's equation E² = p²c² + m²c⁴, this leaves us with E = ±√(p²c² + m²c⁴). Schroedinger was completely mystified about how to handle these negative energy solutions - in fact, the negative energy solutions seemingly required by relativistic quantum mechanics were not really understood for 20 more years. After unsuccessfully wrestling with this problem for several weeks, Schroedinger reluctantly backed up to the non-relativistic equation E = p² / 2m. This led him immediately to his famous equation
ih dψ/dt = -(h²/2m) d²ψ/dx².
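As a quick sanity check, here is a symbolic sketch showing that the traveling wave ψ = e^((i/h)(px - Et)) satisfies this equation exactly when E = p²/2m.

```python
# Check that the plane wave psi = exp((i/hbar)*(p*x - E*t)) satisfies
# Schroedinger's free-particle equation
#     i*hbar * dpsi/dt = -(hbar**2 / (2*m)) * d^2 psi/dx^2
# precisely when E = p**2 / (2*m).
import sympy as sp

x, t, p, m, hbar = sp.symbols('x t p m hbar', positive=True)
E = p**2 / (2 * m)

psi = sp.exp(sp.I / hbar * (p * x - E * t))

lhs = sp.I * hbar * sp.diff(psi, t)
rhs = -hbar**2 / (2 * m) * sp.diff(psi, x, 2)

print(sp.simplify(lhs - rhs))   # -> 0, so the plane wave is a solution
```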
Schroedinger was an old hand at wave equations, just as you will be in a couple of months, and was able to solve this equation almost immediately for a central potential, that is for a hydrogen atom. His results were in very good (but not perfect) agreement with the hydrogen spectrum, and were much more precise than Bohr's results. He too was duly awarded a Nobel prize in 1933.
Some time after Schroedinger published his equation, it became obvious to him that his equation was mathematically equivalent to Heisenberg's equations, and that therefore Schroedinger's equation also implied the Heisenberg Uncertainty Principle. Many physicists were deeply bothered by this idea, including Schroedinger himself. When asked years later how he felt about quantum mechanics, he answered, "I don't like it, and I'm sorry I ever had anything to do with it."
Shortly after Einstein published his theory of General Relativity, several people went to work on extending his ideas. Weyl (Hermann Klaus Hugo, but Peter to his friends) asked the question, "What if your standard of length were to change from place to place?" Machinists have high precision blocks of metal of standard lengths which they call gauge blocks. Weyl called his theory "gauge invariance." That is, Weyl asked if he could somehow make the equations of General Relativity invariant when the length standard, the "gauge," changed. In the applications to General Relativity, this idea of gauge invariance has not turned out to be of central importance. However, the general idea has become central to modern physics - today, all of quantum field theory is based on gauge invariance, although today we often call this idea "phase invariance" for reasons which will become obvious in a few paragraphs.
The two slit experiment shows that electrons act like waves. Schroedinger's equation starts with the assumption that the electron has wave behavior. Waves have a phase, so this must mean electrons have a phase. Can we measure this phase? The photographs above would seem to indicate that we can verify that electrons have phase from their interference behavior, but the individual electron dots certainly show no sign of anything like a phase. Evidently the electrons have a phase, but it's unobservable. This means that Schroedinger's equation should be invariant under phase transformations.
In Schroedinger's equation, we represent the electron with a wave function, ψ, which obeys a wave equation, ih dψ/dt = -(h²/2m) d²ψ/dx². Suppose we were to ask the question, "What if everyone everywhere changed their definition of phase?" This is easy to represent mathematically: we ask what happens if there is a new wave function, ψ' = e^(iχ) ψ. χ is some constant, just a number, and represents our new choice of phase origin. Does ψ' obey Schroedinger's equation whenever ψ does? It's immediately obvious that it does - e^(iχ) is just a number, a constant, so the derivatives have no effect on it. All we've done is multiply Schroedinger's equation by this same constant. This is a good start: it helps reassure us that this mysterious electron phase is unobservable.
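Here is that global check done symbolically; it's a sketch that uses the free-particle plane wave as the known solution.

```python
# Global phase check: multiply a solution of the free Schroedinger equation
# by a constant phase e^(i*chi) and verify that the result still satisfies
# the equation - the constant simply factors straight out.
import sympy as sp

x, t, p, m, hbar, chi = sp.symbols('x t p m hbar chi', real=True)
E = p**2 / (2 * m)

psi = sp.exp(sp.I / hbar * (p * x - E * t))      # a known solution
psi_prime = sp.exp(sp.I * chi) * psi             # same phase shift everywhere

residual = (sp.I * hbar * sp.diff(psi_prime, t)
            + hbar**2 / (2 * m) * sp.diff(psi_prime, x, 2))
print(sp.simplify(residual))                     # -> 0: still a solution
```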
Weyl, however, decided to ask a different question. What if you have a physics lab, and I have a different physics lab, and we each decide on our own phase convention? In fact, there could be an enormous number of physics labs in the universe, each with its own headstrong lab director and each with its own phase convention. Not only that, but different labs could have their directors retire at different times, and the replacement directors might change their minds about their laboratory's phase convention. Of course, all graduate students know that lab directors are also prone to changing their minds on any occasion for any or no reason. Above, when we asked about changing the phase convention, we used the factor ψ' = e^(iχ) ψ. This corresponds to everyone in the universe changing their standard all at the same time. Now we ask, what if each individual lab director is allowed to make his own choice, and to change his mind whenever he wants? This means our phase multiplier χ must be promoted to a function, χ(x,t). Now our new wave function is ψ' = e^(iχ(x,t)) ψ. We ask, does ψ' satisfy Schroedinger's equation whenever ψ does?
ih dψ'/dt = ih e^(iχ(x,t)) ( dψ/dt + ψ dχ/dt ) = ih e^(iχ(x,t)) ( d/dt + dχ/dt ) ψ

-(h²/2m) d²ψ'/dx² = -(h²/2m) e^(iχ(x,t)) ( d²ψ/dx² + 2 (dψ/dx)(dχ/dx) + ψ d²χ/dx² ) = -(h²/2m) e^(iχ(x,t)) ( d/dx + dχ/dx )² ψ
It does not. Houston, we have a problem. We have some pesky terms left over, the dχ/dt and the dχ/dx terms. Can we fix this? I'm going to make a suggestive textual substitution. dχ/dt ≡ e V(x,t), and dχ/dx ≡ e A(x,t). e is the charge on the electron, and V and A are what's left over after dividing dχ/dt and dχ/dx by the electron charge. Now the equation is:
ih dψ'/dt = ih e^(iχ(x,t)) ( d/dt + eV ) ψ

-(h²/2m) d²ψ'/dx² = -(h²/2m) e^(iχ(x,t)) ( d/dx + eA )² ψ
So we see that ψ' is a solution of Schroedinger's equation when we place the original wave function ψ in a potential V = 1/e dχ/dt and A = 1/e dχ/dx. In other words, if we change the phase of the original wave function by a different amount at different places and times, to make the new wave function a solution of Schroedinger's equation we have to add something to space-time. We have to add a set of functions that look just like a particular voltage and a particular vector potential. In a rather abstract sense, we have just derived the need for the electromagnetic field from our requirement of phase invariance.
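Here is a symbolic sketch of this step, using the free-particle plane wave as the original solution: re-phase it locally and look at what's left over when you plug it back into the free equation. Every leftover term involves a derivative of χ, which is exactly what the potentials V and A have to soak up.

```python
# Weyl's observation: give a solution a position- and time-dependent phase,
# psi' = exp(i*chi(x,t)) * psi, and plug it back into the free Schroedinger
# equation.  The leftover terms all involve derivatives of chi - the pieces
# the text absorbs into the potentials V and A.
import sympy as sp

x, t, p, m, hbar = sp.symbols('x t p m hbar', real=True)
E = p**2 / (2 * m)
chi = sp.Function('chi')(x, t)

psi = sp.exp(sp.I / hbar * (p * x - E * t))      # solves the free equation
psi_prime = sp.exp(sp.I * chi) * psi             # locally re-phased version

residual = (sp.I * hbar * sp.diff(psi_prime, t)
            + hbar**2 / (2 * m) * sp.diff(psi_prime, x, 2))

# The residual is no longer zero; every surviving term contains a
# derivative of chi.
print(sp.simplify(residual / psi_prime))
```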
When we changed the phase of our wave function by a constant, the same constant applied everywhere in the universe for all time, we call that a global gauge invariance or a global phase invariance. When we allowed the phase convention to be set differently at each point in space-time, we call that a local gauge invariance or a local phase invariance. We see that the requirement of local phase invariance is much stricter than the requirement for a global phase invariance.
Let's investigate this phase invariance a bit further. We'll choose a very simple case: I'll have a lab in Los Angeles with a particular phase convention, and you'll have a lab in Chicago where you use the same phase convention as the entire rest of the universe. Let's say my phase convention is that I define my phase to be advancing faster than yours: χ(x,t) = V t within my lab, where V is some constant. Immediately outside of my lab I have a circular field which is 100 meters wide, and in my field the phase convention will be χ = V t (100 - r) / 100, where r is the distance from the edge of my lab. So the phase convention changes linearly from my convention to your convention over the width of my field. What does this mean? In my lab, dχ/dx = 0 and dχ/dt = V. In my field, dχ/dr = -V t / 100 and dχ/dt = V (100 - r) / 100. So, me changing my phase faster than you change your phase is exactly like my running my lab at a different voltage than you do, and over the width of my field the voltage difference drops smoothly to zero.
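A tiny symbolic check of this example, using the interpolating χ written above:

```python
# Los Angeles / Chicago example: take the interpolating phase convention
# chi = V*t*(100 - r)/100 across the 100-meter field and look at its
# derivatives, which play the role of the potentials.
import sympy as sp

r, t, V = sp.symbols('r t V', real=True)
chi = V * t * (100 - r) / 100

print(sp.diff(chi, t))   # dchi/dt = V*(100 - r)/100: the "voltage" falls from V to 0 across the field
print(sp.diff(chi, r))   # dchi/dr = -V*t/100: a small vector-potential-like term in between
```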
Does this make sense? Yes! Suppose we ask the question, what are the solutions to Schroedinger's equation which are separable in time and space? That is, ψ(x,t) = T(t)R(x). Now, Schroedinger's equation is
ih dψ/dt = -(h²/2m) d²ψ/dx²

ih d(TR)/dt = -(h²/2m) d²(TR)/dx²

ih R dT/dt = -(h²/2m) T d²R/dx²
Divide this equation by RT, giving
ih (1/T) dT/dt = -(h²/2m) (1/R) d²R/dx²
Since the left hand side depends only on t, and the right hand side depends only on x, and the two sides are equal everywhere and for all time, they must each be equal to the same constant. We'll call that constant k. Now, we have
ih dT/dt = k T
We can solve this immediately, T = e^(-ikt/h). We see that Schroedinger's equation tells us that the wave function's time dependence is a simple exponential, and our constant k is apparently the energy in the wave function. What happened when I said my phase was changing faster than yours? My wave function's time dependence is now e^(-(i/h)(k+eV)t). The rate of change of phase for a particle is proportional to the particle's energy. This means I'm apparently running my lab at a different potential energy than you are, and that potential energy is eV. This is exactly what we expect from our understanding of electromagnetism.
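A quick symbolic confirmation (a sketch; the combination eV is treated as a single symbol standing for the extra potential energy):

```python
# Check that T = exp(-i*k*t/hbar) solves i*hbar*dT/dt = k*T, and that
# raising the potential energy by e*V just makes the phase turn faster,
# T = exp(-i*(k + eV)*t/hbar).
import sympy as sp

t, k, hbar, eV = sp.symbols('t k hbar eV', real=True)

T = sp.exp(-sp.I * k * t / hbar)
print(sp.simplify(sp.I * hbar * sp.diff(T, t) - k * T))              # -> 0

T_shifted = sp.exp(-sp.I * (k + eV) * t / hbar)
print(sp.simplify(sp.I * hbar * sp.diff(T_shifted, t) / T_shifted))  # -> k + eV
```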
I could come up with far more complicated functions χ(x,t), but I think you see now that any function I come up with simply implies a corresponding electromagnetic field to compensate. You do a couple derivatives and you get V and A, the scalar and vector potential. You do a few more derivatives and you get the corresponding E and B fields, the electric and magnetic fields. As long as you can write down the function and it's differentiable, it simply implies an electromagnetic field. If the function you write down is not differentiable then it's also not physically realizable, as it would imply infinite energies and infinite fields.
What have we learned about the electromagnetic field? Well, we see now what it does. When one electron is near another electron, its phase changes faster than if it were not near the electron. The electromagnetic field of an electron changes the phase of other electrons. Furthermore, we know that the potential around an electron drops off as 1/r. This means that the phase effect also drops off as 1/r. If the second electron is not a perfect point, but rather is a bit diffuse in space, then the portion of that electron which is nearby has its phase changing faster than the portion which is far away.
Let's see if we can understand why changing the electron's phase also changes its path through space-time. The phase of an electron changes proportional to its energy. If you raise the potential energy, the phase changes faster. Free particles must move away from high potential energy places. If they didn't move at all, then there would be no response to a potential gradient, no response to a force. If they moved towards the high potential, then the particle would gain potential energy and it would also gain kinetic energy due to its movement. This would be bad. We need to make sure our understanding will make electrons repel each other.
Above we see two electrons, represented as little balls. Do electrons really look like little balls? I'm not really very confident on this point, but I'm pretty sure they have some size. Anyway, let's pretend the electron on the left, A, is somehow stuck where it is, electron super glue perhaps. The electron on the right, B, finds itself in the field of electron A. Of course electron A finds itself in the field of electron B, but electron A is stuck in place so we're not worried about that right now.
At the center of electron B the phase is moving as (mc² + eV), where mc² is the energy due to the electron's mass, and eV is the electron charge times the voltage due to electron A. But, things look a bit different at the left hand edge of this electron. At that edge, the phase is moving as (mc² + 1.1eV) - the left hand edge is about 10% closer to electron A, so the potential felt there is about 10% higher. Similarly the phase at the right hand edge of electron B is moving as (mc² + 0.9eV) - the right hand edge is about 10% further away from electron A, so the potential felt there is about 10% lower. The electron has a problem: different parts of the electron are having their phase advance at different rates. Electrons don't like this: they want their phase to be the same all over, as the two slit experiment shows. What can this electron do about it? It would really like to have the same phase everywhere.
Remember, the electron's wave function is e^((i/h)(px - Et)). The left hand side of the electron has a bit more energy than the center or the right hand side of the electron, so the phase is moving a bit faster. However, the phase is also dependent on the momentum, and with the opposite sign. If the left hand side of the electron starts moving away from electron A, the phase will slow down there. Similarly, if the center of electron B starts moving away from electron A, its phase will slow down to match the right hand edge. The electron will start moving. Due only to the phase rate change across the electron, it will start moving away from the source of the phase change. The movement will be proportional to the phase difference across the electron, that is, proportional to the voltage difference across the electron.
Why does the electron start moving? If we wait just a little bit without moving, the left hand edge of B will be 180° out of phase with the center, and the two pieces will cancel out. We learned last chapter that charge can't just evaporate, it has to move somewhere. If it moves towards electron A, the phase difference only gets bigger and bigger and the cancellation problem only gets worse and worse. The electron has to get out of here, and the only way out that reduces the phase problem is to the right. Alternatively, if you prefer, electrons are always jiggling a little bit. This is due to, take your pick, the Heisenberg Uncertainty Principle, or the Zero Point energy, or some strange kind of fundamental Brownian motion (n.b.: that last subordinate clause would be known on the literature side of the campus as "foreshadowing.") When the electron jiggles to the left, cancellation wipes it partially out. When it jiggles to the right, it temporarily escapes the cancellation. So, rightwards jiggles are highly preferred in this environment.
Does this make sense? Yes! In earlier days, we would have said that there is a force on the electron which is proportional to the gradient of the voltage, which means the force is proportional to the difference in voltage from one side of the electron to the other. Why is the force proportional to the gradient of the potential? It just was. We are now in a position to deduce this. Suppose we let some very little bit of time go by, call it dt, and see how much the phase changes due to the voltage. We watch a little bit of space, call it dx, and see how much the momentum changes over that distance. We already know the answer: the net phase has to add up to zero, so Δp dx = -eΔV dt. Now, divide both sides by dt dx, and we get Δp/dt = -eΔV/dx. Newton had a name for Δp/dt: he called it the force, F. Maxwell had a name for -ΔV/dx: he called it the electric field, E. We're left with F = eE, which is the Lorentz force law. We have deduced this law, except we used only the most basic fact of quantum mechanics: electrons are waves which can constructively and destructively interfere with themselves. We no longer need these concepts of force and acceleration - we can get all the same results by talking about energy, phase (how the electron "feels" about itself), and constructive interference (cooperation, if you prefer). What will the feminists complain about when they find out physics is no longer about evil paternalistic concepts like "force," but rather about enlightened concepts like "energy," "feelings" and "cooperation"?
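A purely numeric sketch of that bookkeeping, with illustrative (assumed) numbers for the voltage drop, the width, and the time step:

```python
# Pick an illustrative voltage drop across a small region and check that
# demanding zero net phase change, dp * dx = -e * dV * dt, reproduces
# F = dp/dt = e * E with E = -dV/dx.
e = 1.602e-19          # electron charge magnitude, C
dV = -1.0e-3           # voltage difference across the region, V (illustrative)
dx = 1.0e-9            # width of the region, m (illustrative)
dt = 1.0e-15           # a short time step, s (illustrative)

dp = -e * dV * dt / dx           # momentum change required to cancel the phase
force_from_phase = dp / dt       # Newton's dp/dt
E_field = -dV / dx               # Maxwell's E = -dV/dx
print(force_from_phase, e * E_field)   # the two agree: F = eE
```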
In order for this all to make sense, we had to rely on the fact that for the electron wave, the momentum contribution to the phase has the opposite sign from the energy contribution. That is, the wave function is e^((i/h)(px - Et)), not e^((i/h)(px + Et)). Up until now, I have simply asserted that the wave function has this minus sign in it, and that Schroedinger put it there too. If it had been a plus sign, however, electron B would have moved towards electron A, and everything would be all backwards. Energy would increase in every interaction, and conservation of energy would be out the window. The universe would be a series of ever increasing explosions. So, it appears this minus sign is critically important. Where did it come from? The answer is that if you write the phase function in relativistic notation, it's not (i/h)(px - Et), it's (i/h) p_μ x^μ. In relativity, space and time are parts of the same thing, so we write everything as four-vectors. The energy of a particle is the time component of the momentum four-vector. We live in a universe where light always travels at the same speed, and nothing can ever accelerate to faster than light. That fact alone is enough to require a minus sign on the time part of any product of four-vectors, and therefore on the Et term in the phase. So, the minus sign was not put there because Schroedinger was psychic and knew that 20 years in the future quantum field theory would require it. It was put there by God when He decided that in this universe there would be a basic speed law - the speed of light - and nothing would be allowed to break the speed law. It just took us about 14 billion years to pop up and figure it out.