Back when Numberphile’s silly video about the zeta function came up, I wrote a post explaining the process of regularization, where physicists take an incorrect infinite result and patch it over to get something finite. At the end of that post I mentioned a particular variant of regularization, called renormalization, which was especially important in quantum field theory.
Renormalization has to do with how we do calculations and make predictions in particle physics. If you haven’t read my post “What’s so hard about Quantum Field Theory anyway?” you should read it before trying to tackle this one. The important concepts there are that probabilities in particle physics are calculated using Feynman diagrams, that those diagrams consist of lines representing particles and points representing the ways they interact, that each line and point in the diagram gives a number that must be plugged into the calculation, and that to do the full calculation you have to add up all the possible diagrams you can draw.
Let’s say you’re interested in finding out the mass of a particle. How about the Higgs?
You can’t weigh it, or otherwise see how gravity affects it: it’s much too light, and decays into other particles much too fast. Luckily, there is another way. As I mentioned in this post, a particle’s mass and its kinetic energy (energy of motion) both contribute to its total energy, which in turn affects what particles it can turn into if it decays. So if you want to find a particle’s mass, you need the relationship between its motion and its energy.
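That relationship is special relativity’s energy-momentum relation. In units where the speed of light is 1:

$$ E^2 = p^2 + m^2 $$

So if you can measure a particle’s total energy $E$ and its momentum $p$, its mass follows as $m = \sqrt{E^2 - p^2}$.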
Suppose we’ve got a Higgs particle moving along. We know it was created out of some collision, and we know what it decays into at the end. With that, we can figure out its mass.
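As a toy illustration (the numbers here are invented for the example, not real collider data), this is how one could reconstruct a parent particle’s mass from the energies and momenta of its decay products, using $m^2 = E^2 - p^2$ in units where $c = 1$:

```python
import math

def invariant_mass(particles):
    """Mass of the parent particle, reconstructed from its decay products.

    Each particle is a tuple (E, px, py, pz) in units where c = 1.
    Energy and momentum are conserved in the decay, so the parent's
    mass follows from m^2 = E_total^2 - |p_total|^2.
    """
    E = sum(p[0] for p in particles)
    px = sum(p[1] for p in particles)
    py = sum(p[2] for p in particles)
    pz = sum(p[3] for p in particles)
    return math.sqrt(E**2 - (px**2 + py**2 + pz**2))

# Two hypothetical massless decay products (E = |p| for each),
# with made-up numbers purely for illustration:
pz = math.sqrt(80.0**2 - 40.0**2)
photon_a = (80.0, 40.0, 0.0, pz)
photon_b = (80.0, -40.0, 0.0, pz)
print(invariant_mass([photon_a, photon_b]))  # close to 80 in these toy units
```

Real experiments do essentially this, just with millions of events and careful accounting for detector resolution.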
There’s a problem here, though: we only know what happens at the beginning and the end of this diagram. We can’t be certain what happens in the middle. That means we need to add in all of the other diagrams, every possible diagram with that beginning and that end.
Just to look at one example, suppose the Higgs particle splits into a quark and an anti-quark (the antimatter version of the quark). If they come back together later into a Higgs, the process would look the same from the outside. Here’s the diagram for it:
When we’re “measuring the Higgs mass”, what we’re actually measuring is the sum of every single diagram that begins with the creation of a Higgs and ends with it decaying.
Surprisingly, that’s not the problem!
The problem comes when you try to calculate the number that comes out of that diagram, when the Higgs splits into a quark-antiquark pair. According to the rules of quantum field theory, those quarks don’t have to obey the normal relationship between total energy, kinetic energy, and mass. They can have any kinetic energy at all, from zero all the way up to infinity. And because it’s quantum field theory, you have to add up all of those possible kinetic energies, all the way up. In this case, the diagram actually gives you infinity.
(Note that not every diagram with unlimited kinetic energy is going to be infinite. The first time theorists calculated infinite diagrams, they were surprised.
For those of you who know calculus, the problem here comes when you integrate over momentum. Each of the two quark lines gives a factor of one over the momentum, and you integrate over four dimensions of momentum (three of space plus one of time), which gives an infinite result. If the diagram had different particles arranged differently, you might divide by more factors of momentum and get a finite value.)
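Schematically, introducing a cap $\Lambda$ on the loop momentum $k$ (this is a cartoon of the power counting, not the full integral):

$$ \int^{\Lambda} \frac{d^4k}{k^2} \;\sim\; \int^{\Lambda} \frac{k^3\,dk}{k^2} \;=\; \int^{\Lambda} k\,dk \;=\; \frac{\Lambda^2}{2} \;\to\; \infty \quad \text{as } \Lambda \to \infty $$

where $d^4k \sim k^3\,dk$ counts the volume of four-dimensional momentum space. With four powers of momentum downstairs instead of two, you’d get $\int^{\Lambda} dk/k$, which grows only logarithmically; with more than four, the integral stays finite as $\Lambda \to \infty$.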
The modern understanding of infinite results like this is that they arise from our ignorance. The mass of the Higgs isn’t actually infinity, because we can’t just add up every kinetic energy up to infinity. Instead, at some point before we get to infinity “something else” happens.
We don’t know what that “something else” is. It might be supersymmetry, it might be something else altogether. Whatever it is, we don’t know enough about it now to include it in the calculations as anything more than a cutoff, a point beyond which “something” happens. A theory with a cutoff like this, one that is only “effective” below a certain energy, is called an Effective Field Theory.
While we don’t know what happens at higher energies, we still need a way to complete our calculations if we want to use them in the real world. That’s where renormalization comes in.
When we use renormalization, we bring in experimental observations. We know that, whatever is contributing to the Higgs particle’s mass, the value we observe in the real world is finite. “Something” must be canceling the divergence, so we simply assume that it does, and that the final result agrees with experiment!
To do this, we accept the experimental result for the mass of the Higgs, which means we lose any ability to predict that mass from our theory. This is a general rule for renormalization: we trade ignorance (of the “something” that happens at high energy) for a loss of predictability.
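Schematically, the bookkeeping is a subtraction (a sketch, not the full calculation):

$$ m_{\text{observed}}^2 \;=\; m_0^2(\Lambda) \;+\; \delta m^2(\Lambda) $$

The loop diagram contributes the cutoff-dependent piece $\delta m^2(\Lambda)$. Renormalization amounts to letting the unmeasurable “bare” mass $m_0$ depend on the cutoff in just the right way that the sum equals the measured value. The cutoff dependence cancels, but $m_{\text{observed}}$ is now an input taken from experiment rather than an output of the theory.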
If we had to do this for every calculation, we couldn’t predict anything at all. Luckily, for many theories (called renormalizable theories) there are theorems proving that you only need to do this a few times to fix the entire theory. You give up the ability to predict the results of a few experiments, but you gain the ability to predict the rest.
Luckily for us, the Standard Model is a renormalizable theory. Unfortunately, some important theories are not. In particular, quantum gravity is non-renormalizable. In order to fix the infinities in quantum gravity, you need to do the renormalization trick an infinite number of times, losing an infinite amount of predictability. Thus, while making a theory of quantum gravity is not difficult in principle, in practice the most obvious way to create the theory results in a “theory” that can never make any predictions.
One of the biggest virtues of string theory (some would say its greatest virtue) is that these infinities never appear. You never need to renormalize string theory in this way, which is what lets it work as a theory of quantum gravity. N=8 supergravity, the gravity cousin of N=4 super Yang-Mills, might also have this handy property, which is why many people are so eager to study it.
If a theory “predicts” infinity instead of, say, zero, doesn’t that mean the theory is badly formulated? Every theory is incomplete, but that does not mean it should automatically fail in this “catastrophic” way.
The quirky thing about high-energy physics is that just being incomplete is often enough to cause this sort of “catastrophic” failure. Because you’re adding up every possible energy up to infinity, if you aren’t taking into account absolutely everything that can cancel then you run a high risk of getting (false) infinite results. When you do complete things in a nice way (supersymmetry, for example), then the infinities go away.
The thing to keep in mind here is that “badly formulated” is subjective. Dirac certainly considered renormalization to be “badly formulated”…on the other hand, is a theory really badly formulated if it makes detailed predictions about the world, predictions that go beyond the information that was fed in?
Many people would argue that string theory is the “well formulated” theory that completes “badly formulated” quantum field theory, since it avoids problems like this.
“being incomplete is often enough to cause this sort of “catastrophic” failure” – that is not proven! These are completely different things. Your appeal to “adding up every possible energy up to infinity”, which you treat as evident proof, in fact proves the opposite. High-energy modes, whatever they are, are frozen out and cannot physically enter the calculations. So when some theory involves them in a catastrophic way, that is a flaw in how the theory is formulated.
“that you consider as an evident proof” : you might have missed it, but this is a blog about explaining science to the general public, not a scientific publication. What I described is how the vast majority of people do quantum field theory. If (as your wordpress suggests) you have an idea for a different prescription, it’s on you to do the work to establish its utility and promulgate it in the scientific community. Only when something is sufficiently common and accepted is it an appropriate subject for a popularization/outreach-focused blog like this one.
I’m sure if your idea is good you’ll get there eventually. 😉
I did not get your point. Am I not allowed to discuss your statements with you, or to ask your opinion about the rigor of your statements? What do you propose?
I was making two points:
First, since this blog is designed for physics popularization, most of the people who post comments are laymen. The style of your first post was very much along those lines, asking for clarification, so I was assuming that was the kind of person I was dealing with. When you turn around and start debating your own ideas it comes off as deceptive, like a parent sneaking into a biology class to argue with the teacher about evolution. Regardless of your intent, it’s not a good start.
Second, if you’re actually working on a viable alternative to renormalization, then I’m simply not the right person to talk to. As you should be well aware, people in physics are quite specialized these days. I’m not going to be able to evaluate legitimately groundbreaking work in someone else’s subfield, least of all in the comments section of my own blog.
Whatever I do in my own blog, in your blog I was only asking questions about your statements. I was genuinely interested to learn how you convinced yourself to accept the renormalization business.
Are you sure that the cutoff represents new physics? In condensed matter, it corresponds to a physical lattice, but in a relativistic QFT, we want to have Lorentz invariance, so in the end we have to control the limit where the cutoff goes to infinity.
My present view is that the cutoff is a mathematical trick which is needed to turn the quantum fields, which are operator valued distributions, into smoother objects.
In high-energy physics, people usually expect the cutoff to correspond to new physics, but I don’t think anyone’s sure in general. In particular, I’m not familiar enough with the literature on operator-valued distributions to comment on that possibility.