Last week, I mentioned some interesting new results in my corner of physics. I’ve now finally read the two papers and watched the recorded talk, so I can satisfy my frustrated commenters.
Quantum mechanics is a very cool topic and I am much less qualified than you would expect to talk about it. I use quantum field theory, which is based on quantum mechanics, so in some sense I use quantum mechanics every day. However, most of the “cool” implications of quantum mechanics don’t come up in my work. All the debates about whether measurement “collapses the wavefunction” are irrelevant when the particles you measure get absorbed in a particle detector, never to be seen again. And while there are deep questions about how a classical world emerges from quantum probabilities, they don’t matter so much when all you do is calculate those probabilities.
They’ve started to matter, though. That’s because quantum field theorists like me have recently started working on a very different kind of problem: trying to predict the output of gravitational wave telescopes like LIGO. It turns out you can do almost the same kind of calculation we’re used to: pretend two black holes or neutron stars are sub-atomic particles, and see what happens when they collide. This trick has grown into a sub-field in its own right, one I’ve dabbled in a bit myself. And it’s gotten my kind of physicists to pay more attention to the boundary between classical and quantum physics.
The thing is, the waves that LIGO sees really are classical. Any quantum gravity effects there are tiny, undetectably tiny. And while this doesn’t have the implications an expert might expect (we still need loop diagrams), it does mean that we need to take our calculations to a classical limit.
Figuring out how to do this has been surprisingly delicate, and full of unexpected insight. A recent example involves two papers, one by Andrea Cristofoli, Riccardo Gonzo, Nathan Moynihan, Donal O’Connell, Alasdair Ross, Matteo Sergola, and Chris White, and one by Ruth Britto, Riccardo Gonzo, and Guy Jehu. At first I thought these were two groups hitting on the same idea, but then I noticed Riccardo Gonzo on both lists, and realized the papers were covering different aspects of a shared story. There is another group who happened upon the same story: Paolo Di Vecchia, Carlo Heissenberg, Rodolfo Russo and Gabriele Veneziano. They haven’t published yet, so I’m basing this on the Gonzo et al. papers.
The key question each group asked was, what does it take for gravitational waves to be classical? One way to ask the question is to pick something you can observe, like the strength of the field, and calculate its uncertainty. Classical physics is deterministic: if you know the initial conditions exactly, you know the final conditions exactly. Quantum physics is not. What should happen is that if you calculate a quantum uncertainty and then take the classical limit, that uncertainty should vanish: the observation should become certain.
Another way to ask is to think about the wave as made up of gravitons, particles of gravity. Then you can ask how many gravitons are in the wave, and how they are distributed. It turns out that you expect them to be in a coherent state, like a laser, one with a very specific distribution called a Poisson distribution: a distribution in some sense right at the border between classical and quantum physics.
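If you want to see that Poisson claim in numbers, here’s a minimal toy check (my own sketch in Python, not anything from the papers; the amplitude α = 2 is an arbitrary choice): squaring the textbook coherent-state amplitudes ⟨n|α⟩ reproduces a Poisson distribution with mean |α|².

```python
from math import exp, factorial, sqrt

from scipy.stats import poisson

# Toy check (my own sketch, not from the papers): the number distribution
# of a coherent state |alpha> follows from its overlap with the n-particle
# state, <n|alpha> = exp(-|alpha|^2 / 2) * alpha^n / sqrt(n!).  Squaring
# those amplitudes should reproduce a Poisson distribution of mean |alpha|^2.

alpha = 2.0  # arbitrary coherent-state amplitude, chosen for illustration
mean = abs(alpha) ** 2

for n in range(8):
    amplitude = exp(-abs(alpha) ** 2 / 2) * alpha ** n / sqrt(factorial(n))
    print(f"n={n}:  |<n|alpha>|^2 = {abs(amplitude) ** 2:.6f}"
          f"   Poisson pmf = {poisson.pmf(n, mean):.6f}")
```

The two columns agree exactly, which is the sense in which a coherent state’s particle count is Poisson-distributed.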
The results of both types of questions were as expected: the gravitational waves are indeed classical. To make this work, though, the quantum field theory calculation needs to have some surprising properties.
If two black holes collide and emit a gravitational wave, you could depict it like this:
where the straight lines are black holes, and the squiggly line is a graviton. But since gravitational waves are made up of multiple gravitons, you might ask, why not depict it with two gravitons, like this?
It turns out that diagrams like that are a problem: they mean your two gravitons are correlated, which is not allowed in a Poisson distribution. In the uncertainty picture, they also would give you non-zero uncertainty. Somehow, in the classical limit, diagrams like that need to go away.
And at first, it didn’t look like they do. You can try to count how many powers of Planck’s constant show up in each diagram. The authors do that, and it certainly doesn’t look like it goes away:
Luckily, these quantum field theory calculations have a knack for surprising us. Calculate each individual diagram, and things look hopeless. But add them all together, and they miraculously cancel. In the classical limit, everything combines to give a classical result.
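To get a feel for the pattern, here’s a deliberately silly cartoon of the cancellation (entirely made up, nothing like the real diagram algebra; the symbols a, b, c just stand in for generic diagram contributions): two “diagrams” whose 1/ħ pieces come with opposite signs add up to something finite.

```python
import sympy as sp

# A cartoon of the cancellation (a made-up toy model, not the actual
# calculation from the papers).  Pretend two "diagrams" each carry a
# piece that blows up as hbar -> 0, with opposite signs:
hbar, a, b, c = sp.symbols('hbar a b c', positive=True)

diagram_1 = c / hbar + a           # singular on its own in the classical limit
diagram_2 = -c / hbar + b * hbar   # equal and opposite singular piece

total = sp.simplify(diagram_1 + diagram_2)
print(total)                       # a + b*hbar: the 1/hbar pieces have cancelled
print(sp.limit(total, hbar, 0))    # classical limit: a, perfectly finite
```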
You can do this same trick for diagrams with more graviton particles, as many as you like, and each time it ought to keep working. You get an infinite set of relationships between different diagrams, relationships that have to hold to get sensible classical physics. From thinking about how the quantum and classical are related, you’ve learned something about calculations in quantum field theory.
That’s why these papers caught my eye. A chunk of my sub-field needs to learn more and more about the relationship between quantum and classical physics, and it may have implications for the rest of us too. In the future, I might get a bit more qualified to talk about some of the very cool implications of quantum mechanics.
This post reminded me of something that came up in a QFT class I took years ago: there’s an uncertainty relation that’s not typically discussed between, e.g., the phase and particle number of a coherent wave (maybe not discussed because, being a many-particle relation, it’s not relevant for perturbative calculations?). Just curious if that has any relevance in the gravitational wave context…
Interesting! I suspect it’s not super-relevant, because I doubt LIGO can see phases. But it should be replicable with the same formalism at least.
“to talk about some of the very cool implications of quantum mechanics.”
One big implication of quantum mechanics is that matter is not made of particles but rather of fields; particles are only ever approximations of fields, valid whenever the fields are sufficiently localised and free. Once a ‘particle’ interacts with the quantum fields of a detector, as in various experiments such as the double-slit experiment and the Stern-Gerlach experiment, the simple particle picture breaks down and one needs the full quantum field theory treatment of quantum mechanics.
My opinion is that false priors in the late 1800s led to one or more misconceptions that then led to the fixation on the quantum and all the mysterious characteristics of quantum mechanics that are wrestled with to this day. I believe I can explain what has happened in simple, sensible terms, but only if 4gravitons says it’s ok. As a 2022 teaser, imagine indestructible point charges are the basis of everything, and attracted pairs form Bohr-like orbitals 2-2-2 at vastly different energy scales for each of these three orbiting dipoles. Three orbital planes. How does this structure survive as the dominant structure in the universe, the Noether core that powers and balances all particles? Shielding. Simple wave superposition and cancellation. Locally some rather intense fields still emit, and the next orbital is six point charges, and if you look at all the combinations, it pretty much explains 3/4 of the standard matter particles. May I continue?
4gravitons,
I find the evaluation of the theoretical uncertainty in classical/quantum theories very interesting, yet, to my knowledge, no rigorous calculation exists.
Let’s consider an electron that passes through a narrow slit and gets detected at a screen.
Classically, you could in principle predict the point of detection exactly if the positions/momenta of all electrons/nuclei included in the experiment are known, together with the EM fields. This information cannot be obtained, even in principle, since the instrument you would use to perform the measurements is in an unknown microscopic state. Trying to measure the instrument itself does not work, since you need another instrument, and so on. How large is this uncertainty, and how does it compare with the quantum one?
In QM there are two possible sources of uncertainty: a fundamental one and a practical one (similar to the above classical case). Both contribute to the Heisenberg formula. How large is each one? If the full quantum state of the system (electron+slit+screen) were known, how much would the prediction improve over the “normal” situation where only the electron’s quantum state is known?
“In QM there are two possible sources of uncertainty, a fundamental one and a practical one (similar to the above classical case). Both contribute to the Heisenberg formula.”
You’re mistaken about the last sentence here, unless you mean something unusual by “Heisenberg formula”. In QM, the Heisenberg uncertainty principle only captures fundamental uncertainty. If you want to include practical measurement uncertainty, you have to add that on top.
As for how to add that on top, in either classical or quantum physics, there is of course a long tradition of estimating these things. It starts with the kinds of error estimation you learn to do as a physics undergraduate. For example, if you measure something with a ruler, you have an uncertainty in your measurement equal to the spacing between marks on the ruler. Every serious experiment in physics will have these kinds of estimates, typically with much more sophistication. They’ll take into account how precisely machined their equipment is, how pure their vacuum is, the temperature of their apparatus…then estimate, using currently known physics, how big of an effect these uncertainties can have on the results. If these uncertainties are larger than the effect they’re trying to observe, then they can’t conclude anything from their experiment, and they need an experiment with lower uncertainty.
(In particular, this means that for any experiment looking for quantum effects the classical error estimate will be smaller than the quantum effects they’re looking for…otherwise they wouldn’t do the experiment!)
For classical experiments (experiments on scales big enough that one doesn’t expect quantum effects), these uncertainties typically line up well with the “frequentist” uncertainty (do the experiment multiple times, see the spread in the results). Sometimes, this frequentist approach gets used to estimate uncertainties instead, sometimes it’s used as a double-check.
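As a throwaway illustration of that double-check (my own made-up numbers, not from any real experiment), you can simulate repeatedly measuring a length with a ruler marked every 0.1 units and compare the spread of the readings to the naive one-mark error estimate:

```python
import numpy as np

# A toy simulation of the frequentist double-check (made-up numbers).
# "Measure" a length of 10.00 many times with a ruler marked every 0.1:
# each reading gets a little placement jitter, then gets rounded to the
# nearest mark.  The spread of the readings should be comparable to the
# naive one-mark-spacing error estimate.

rng = np.random.default_rng(0)
true_length, mark_spacing = 10.00, 0.1

readings = np.round(
    (true_length + rng.normal(0, 0.05, size=10_000)) / mark_spacing
) * mark_spacing

print(f"mean of readings:  {readings.mean():.3f}")
print(f"spread (std dev):  {readings.std():.3f}  vs naive estimate ~{mark_spacing}")
```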
(Again, none of this is particularly relevant to the post, which has to do with people using quantum calculations to compute classical effects. In those calculations, you need to make sure that the quantum uncertainty vanishes, so that in the end you have only the practical uncertainty left, because the experiments they’re describing should not be observably quantum in character.)
I’ll try to better define the types of uncertainty I’m speaking about:
Fundamental uncertainty: it’s part of the physical law. This uncertainty remains even if you have complete knowledge of the present state. Classical physics does not involve such an uncertainty, some interpretations of QM do.
Theoretical uncertainty: not part of the physical law, but a direct consequence of the lack of information about the initial state. It’s the uncertainty that remains even if you have infinite time and resources to perform any experiment you want, but you are not all-knowing (you don’t know the initial state). I think classical physics does imply such an uncertainty, for the reasons presented in my earlier post (you measure the system with an instrument which is itself in an unknown microscopic state). If you use a ruler, that ruler cannot be more accurate than the atomic size. You might be able to imagine a better way to measure a distance than a ruler, but you will always lack some required knowledge. As far as I can say, the theoretical uncertainty associated with classical physics is not known. A question such as:
How accurately can you measure both position and momentum of an electron (assumed to be a classical particle) given infinite time and resources?
should allow for an exact answer, but we do not know that answer.
Practical uncertainty: this is the one you refer to; I agree it can be estimated, and it is estimated in any experiment. I am not interested in this.
I get that the classical treatment of the black hole merger assumes complete knowledge of the initial state, so there is no uncertainty involved. It is not obvious to me that Heisenberg’s uncertainty has to be fundamental. It might be caused, at least in part, by our lack of knowledge of the initial state. You cannot test this hypothesis directly, since we are not all-knowing. I think that estimating the classical theoretical uncertainty could shed some light on this issue.
As far as I can tell your notion of theoretical uncertainty is just a limiting case of practical uncertainty: some notion of the “lowest possible” practical uncertainty. But in classical physics, there is no such “lowest possible”: while you are correct that you can never make a perfect measurement, in principle you can always make a measurement up to any given accuracy. To get some kind of finite minimum theoretical uncertainty, you need some way of introducing a limiting scale, and you don’t have one without the fundamental uncertainty of quantum mechanics.
(You could tie your minimum uncertainty to properties of human beings: we live only so long, our brains only hold so much information, etc. This would then vary from person to person and change when computers get involved. You could also try to get it from general relativity, from the problem that a sufficiently dense measurement device would collapse to a black hole. But there is no reason for a small measurement device to also be dense unless you invoke Heisenberg, at which point you’re circular.)
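To spell out that circularity with standard textbook dimensional analysis (my own aside, a sketch rather than anything from the discussion above): the quantum localization scale of a mass already involves ħ, so the limiting scale you get by equating it with the collapse scale, the Planck length, contains ħ from the start.

```latex
% Textbook dimensional analysis (a sketch, not a derivation from the post):
% quantum localization scale of a mass m:    \Delta x \sim \hbar / (m c)
% black-hole collapse scale for the same m:  r_s \sim G m / c^2
% Equating the two fixes the limiting scale:
\frac{\hbar}{m c} \sim \frac{G m}{c^2}
\quad\Longrightarrow\quad
\ell_P = \sqrt{\frac{\hbar G}{c^3}} \approx 1.6 \times 10^{-35}\ \mathrm{m}
% which contains \hbar from the start -- hence the circularity.
```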
“in classical physics, there is no such “lowest possible”: while you are correct that you can never make a perfect measurement, in principle you can always make a measurement up to any given accuracy. To get some kind of finite minimum theoretical uncertainty, you need some way of introducing a limiting scale, and you don’t have one without the fundamental uncertainty of quantum mechanics.”
In order to measure both the position and momentum of a classical particle with an arbitrary accuracy you need to arbitrarily reduce the perturbation associated with the measurement. The only way to do that is to make the test particle arbitrarily light, so when applying Newton’s third law, your system changes momentum by some arbitrarily small amount.
The problem is that nature does not give us charged particles that are smaller than electrons, so we are stuck with using electrons to measure other electrons. It’s like using planets to measure the position of other planets. Very big disturbance. So, I’d say, the fact that electrons can’t be split in smaller charged particles imposes a limit to the accuracy to any classical measurement.
You can also measure electrons with photons, which can be arbitrarily soft.
“You can also measure electrons with photons, which can be arbitrarily soft.”
Sure, but after those low energy EM waves (there are no photons in classical physics) scatter from the electron you need to detect them to confirm that the scattering occurred. Detecting them requires an interaction between those scattered EM waves and the detector. And this is where the problem appears.
For reasons mentioned earlier (lack of knowledge of the exact microstate of the detector) you can only treat this detector statistically. If a detector is a very sensitive fluorescent screen, there will be a chance to get a signal even if there are no EM waves. An electron in a molecule can get enough energy just by interacting with the nearby molecules. In other words, there will be some noise which cannot be minimized below a certain level.
If the scattered EM waves have a very low energy the signal in the detector would be indistinguishable from noise. In order to increase the certainty of detection you need to increase the energy of the incident waves and so you need to perturb the electron more, increasing the uncertainty of its momentum.
So, classically, we will still have an uncertainty relation, but ħ/2 would be replaced with a different constant representing the lowest theoretical detector noise. I have no idea how to calculate that.
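To illustrate what I mean with a toy model (arbitrary units, all the numbers made up by me): treat the detector output as signal plus Gaussian noise with a fixed 3σ detection threshold. A weak signal gets flagged at essentially the false-positive rate, and only becomes reliably detectable once it’s several times the noise.

```python
import numpy as np

# A toy model of the signal-vs-noise trade-off described above (arbitrary
# units, made-up numbers; not a real detector calculation).  The detector
# output is signal + Gaussian noise, and "detection" means the output
# exceeds a fixed 3-sigma threshold.

rng = np.random.default_rng(1)
noise_level = 1.0
threshold = 3 * noise_level  # simple detection criterion

for signal in [0.0, 0.5, 1.0, 3.0, 6.0]:
    outputs = signal + rng.normal(0, noise_level, size=100_000)
    rate = np.mean(outputs > threshold)
    print(f"signal = {signal:>3}: flagged as detected {rate:6.2%} of the time")
```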
You can make the detector out of EM waves as well, though, theoretically speaking. From a theoretical perspective, the detector just has to be something that can respond to the phenomenon you’re trying to detect.
The only reason you in practice need to build a detector out of electrons and the like is because human senses have limited range and precision (and themselves use electrons to work), so you need to technologically amplify things. But that depends on specific properties of humans. So once again your theoretical minimum is subjective, based on the capabilities of the particular measurer you’re considering. You’re allowed to do that, of course, but you have to keep in mind that what you’re discovering will vary from person to person and entity to entity.
I don’t see how a detector can be made out of EM waves. Can you give me an example?
I don’t think the theoretical minimum would be subjective. The critical part is to get a suitable S/N ratio when the EM wave interacts with the detector. You can amplify this signal later and/or convert it so that any measurer can perceive it. The noise depends on how the energy is distributed among the electrons inside the detector; it’s an objective quantity, probably not much different from how the velocities of the molecules in a gas at a certain temperature are distributed.
For example, the detector could be a simple EM wave with a specified wavevector. In the neighborhood of a charged particle the EM wave will be different from how it would be without a charge present, so the EM wave detects the presence of the particle. That’s already a detector in a theoretical sense, you don’t need electrons unless you want to amplify the effect for a measurer of a particular size.
Heh, seems more intuitively comprehensible than I thought 🙂
Hopefully the patterns that lead to everything cancelling out make some sense beyond “it just magically worked”.
Yeah, there’s a link to eikonal exponentiation and things like that; the patterns are definitely “because of something”, though I don’t think the full story is known yet.