Tag Archives: quantum field theory

Made of Quarks Versus Made of Strings

When you learn physics in school, you learn it in terms of building blocks.

First, you learn about atoms. Indivisible elements, as the Greeks foretold…until you learn that they aren’t indivisible. You learn that atoms are made of electrons, protons, and neutrons. Then you learn that protons and neutrons aren’t indivisible either, they’re made of quarks. They’re what physicists call composite particles, particles made of other particles stuck together.

Hearing this story, you notice a pattern. Each time physicists find a more fundamental theory, they find that what they thought were indivisible particles are actually composite. So when you hear physicists talking about the next, more fundamental theory, you might guess it has to work the same way. If quarks are made of, for example, strings, then each quark is made of many strings, right?

Nope! As it turns out, there are two different things physicists can mean when they say a particle is “made of” a more fundamental particle. Sometimes they mean the particle is composite, like the proton is made of quarks. But sometimes, like when they say particles are “made of strings”, they mean something different.

To understand what this “something different” is, let’s go back to quarks for a moment. You might have heard there are six types, or flavors, of quarks: up and down, strange and charm, top and bottom. The different types have different mass and electric charge. You might have also heard that quarks come in different colors: red, green, and blue. You might wonder, then: aren’t there really eighteen types of quark? Red up quarks, green top quarks, and so forth?

Physicists don’t think about it that way. Unlike the different flavors, the different colors of quark have a more unified mathematical description. Changing the color of a quark doesn’t change its mass or electric charge. All it changes is how the quark interacts with other particles via the strong nuclear force. Know how one color works, and you know how the other colors work. Different colors can also “mix” together, similarly to how different situations can mix together in quantum mechanics: just as Schrodinger’s cat can be both alive and dead, a quark can be both red and green.
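To make that “mixing” a little more concrete, here is a schematic way to write it (my notation, just for illustration): a single quark’s color state can be a quantum superposition,

|q\rangle = \alpha\, |\text{red}\rangle + \beta\, |\text{green}\rangle + \gamma\, |\text{blue}\rangle, \qquad |\alpha|^2 + |\beta|^2 + |\gamma|^2 = 1

Rotating the three coefficients into one another changes nothing about the quark’s mass or electric charge; all it changes is how the quark interacts via the strong nuclear force.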

This same kind of thing is involved in another example, electroweak unification. You might have heard that electromagnetism and the weak nuclear force are secretly the same thing. Each force has corresponding particles: the familiar photon for electromagnetism, and W and Z bosons for the weak nuclear force. Unlike the different colors of quarks, photons and W and Z bosons have different masses from each other. It turns out, though, that they still come from a unified mathematical description: they’re “mixtures” (in the same Schrodinger’s cat-esque sense) of the particles from two more fundamental forces (sometimes called “weak isospin” and “weak hypercharge”). The reason they have different masses isn’t their own fault, but the fault of the Higgs: the Higgs field we have in our universe interacts with different parts of this unified force differently, so the corresponding particles end up with different masses.
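In the electroweak case this mixing can be written down explicitly. Schematically, in the standard notation (which the paragraph above doesn’t spell out, and with signs depending on the textbook):

A = B\,\cos\theta_W + W^3\,\sin\theta_W, \qquad Z = -B\,\sin\theta_W + W^3\,\cos\theta_W

Here A is the photon field, B the weak hypercharge boson, W^3 one of the weak isospin bosons, and \theta_W the “weak mixing angle” that measures how much the two original forces mix. The photon ends up massless and the Z massive because of how the Higgs field interacts with each combination.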

A physicist might say that electromagnetism and the weak force are “made of” weak isospin and weak hypercharge. And it’s that kind of thing that physicists mean when they say that quarks might be made of strings, or the like: not that quarks are composite, but that quarks and other particles might have a unified mathematical description, and look different only because they’re interacting differently with something else.

This isn’t to say that quarks and electrons can’t be composite as well. They might be; we don’t know for sure. If they are, the forces binding them together must be very strong, strong enough that our most powerful colliders can’t make them wiggle even a little out of shape. The tricky part is that composite particles get mass from the energy holding them together. A particle held together by very powerful forces would normally be very massive; if you want it to end up lighter, you have to construct your theory carefully. So while occasionally people will suggest theories where quarks or electrons are composite, these theories aren’t common. Most of the time, if a physicist says that quarks or electrons are “made of” something else, they mean something more like “particles are made of strings” than like “protons are made of quarks”.

Assumptions for Naturalness

Why did physicists expect to see something new at the LHC, more than just the Higgs boson? Mostly, because of something called naturalness.

Naturalness, broadly speaking, is the idea that there shouldn’t be coincidences in physics. If two numbers that appear in your theory cancel out almost perfectly, there should be a reason that they cancel. Put another way, if your theory has a dimensionless constant in it, that constant should be close to one.

(To see why these two concepts are the same, think about a theory where two large numbers miraculously almost cancel, leaving just a small difference. Take the ratio of one of those large numbers to the difference, and you get a very large dimensionless number.)
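A made-up numerical example of that translation: take

a = 1.0000001 \times 10^{16}, \qquad b = 1.0000000 \times 10^{16}, \qquad a - b = 10^{9}, \qquad \frac{a}{a-b} = 10^{7}

A cancellation accurate to seven digits and a dimensionless number of ten million are the same coincidence, phrased two different ways.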

You might have heard it said that the mass of the Higgs boson is “unnatural”. There are many different physical processes that affect what we measure as the mass of the Higgs. We don’t know exactly how big these effects are, but we do know that they grow with the scale of “new physics” (aka the mass of any new particles we might have discovered), and that they have to cancel to give the Higgs mass we observe. If we don’t see any new particles, the Higgs mass starts looking more and more unnatural, driving some physicists to the idea of a “multiverse”.
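Schematically (this is the standard cartoon of the problem, with c standing in for a combination of couplings I’m not spelling out), the measured Higgs mass squared is a bare piece plus corrections that grow with the scale \Lambda of new physics:

m_H^2 \approx m_{H,0}^2 + c\,\Lambda^2

The left-hand side is the observed (125\ \text{GeV})^2, so if \Lambda sits far beyond the LHC’s reach, the two terms on the right have to cancel to many decimal places. That is exactly the kind of coincidence naturalness forbids.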

If you find parts of this argument hokey, you’re not alone. Critics of naturalness point out that we don’t really have a good reason to favor “numbers close to one”, nor do we have any way to quantify how “bad” a number far from one is (we don’t know the probability distribution, in other words). They critique theories that do preserve naturalness, like supersymmetry, for being increasingly complicated and unwieldy, violating Occam’s razor. And in some cases they act baffled by the assumption that there should be any “new physics” at all.

Some of these criticisms are reasonable, but some are distracting and off the mark. The problem is that the popular argument for naturalness leaves out some important assumptions. These assumptions are usually kept in mind by the people arguing for naturalness (at least the more careful people), but aren’t often made explicit. I’d like to state some of these assumptions. I’ll be framing the naturalness argument in a bit of an unusual (if not unprecedented) way. My goal is to show that some criticisms of naturalness don’t really work, while others still make sense.

I’d like to state the naturalness argument as follows:

  1. The universe should be ultimately described by a theory with no free dimensionless parameters at all. (For the experts: the theory should also be UV-finite.)
  2. We are reasonably familiar with theories of the sort described in 1., we know roughly what they can look like.
  3. If we look at such a theory at low energies, it will appear to have dimensionless parameters again, based on the energy where we “cut off” our description. We understand this process well enough to know what kinds of values these parameters can take, starting from 2.
  4. Point 3. can only be consistent with the observed mass of the Higgs if there is some “new physics” at around the scales the LHC can measure. That is, there is no known way to start with a theory like those of 2. and get the observed Higgs mass without new particles.

Point 1. is often not explicitly stated. It’s an assumption, one that sits in the back of a lot of physicists’ minds and guides their reasoning. I’m really not sure I can fully justify it; it seems like it should be a consequence of what a final theory is.

(For the experts: you’re probably wondering why I’m insisting on a theory with no free parameters, when usually this argument just demands UV-finiteness. I demand this here because I think this is the core reason why we worry about coincidences: free parameters of any intermediate theory must eventually be explained in a theory where those parameters are fixed, and “unnatural” coincidences are those we don’t expect to be able to fix in this way.)

Point 2. may sound like a stretch, but it’s less of one than you might think. We do know of a number of theories that have few or no dimensionless parameters (and that are UV-finite); they just don’t describe the real world. Treating these theories as toy models, we can hopefully get some idea of how theories like this should look. We also have a candidate theory of this kind that could potentially describe the real world, M theory, but it’s not fleshed out enough to answer these kinds of questions definitively at this point. At best it’s another source of toy models.

Point 3. is where most of the technical arguments show up. If someone talking about naturalness starts talking about effective field theory and the renormalization group, they’re probably hashing out the details of point 3. Parts of this point are quite solid, but once again there are some assumptions that go into it, and I don’t think we can say that this point is entirely certain.

Once you’ve accepted the arguments behind points 1.-3., point 4. follows. The Higgs is unnatural, and you end up expecting new physics.

Framed in this way, arguments about the probability distribution of parameters are missing the point, as are arguments from Occam’s razor.

The point is not that the Standard Model has unlikely parameters, or that some in-between theory has unlikely parameters. The point is that there is no known way to start with the kind of theory that could be an ultimate description of the universe and end up with something like the observed Higgs and no detectable new physics. Such a theory isn’t merely unlikely: if you take this argument seriously, it’s impossible. If your theory gets around this argument, it can be as cumbersome and Occam’s-razor-violating as it wants; it’s still a better shot than no possible theory at all.

In general, the smarter critics of naturalness are aware of this kind of argument, and don’t just talk probabilities. Instead, they reject some combination of point 2. and point 3.

This is more reasonable, because point 2. and point 3. are, on some level, arguments from ignorance. We don’t know of a theory with no dimensionless parameters that can give something like the Higgs with no detectable new physics, but maybe we’re just not trying hard enough. Given how murky our understanding of M theory is, maybe we just don’t know enough to make this kind of argument yet, and the whole thing is premature. This is where probability can sneak back in, not as some sort of probability distribution on the parameters of physics but just as an estimate of our own ability to come up with new theories. We have to guess what kinds of theories can make sense, and we may well just not know enough to make that guess.

One thing I’d like to know is how many critics of naturalness reject point 1. Because point 1. isn’t usually stated explicitly, it isn’t often responded to explicitly either. The way some critics of naturalness talk makes me suspect that they reject point 1., that they honestly believe that the final theory might simply have some unexplained dimensionless numbers in it that we can only fix through measurement. I’m curious whether they actually think this, or whether I’m misreading them.

There’s a general point to be made here about framing. Suppose that tomorrow someone figures out a way to start with a theory with no dimensionless parameters and plausibly end up with a theory that describes our world, matching all existing experiments. (People have certainly been trying.) Does this mean naturalness was never a problem after all? Or does that mean that this person solved the naturalness problem?

Those sound like very different statements, but it should be clear at this point that they’re not. In principle, nothing distinguishes them. In practice, people will probably frame the result one way or another based on how interesting the solution is.

If it turns out we were missing something obvious, or if we were extremely premature in our argument, then in some sense naturalness was never a real problem. But if we were missing something subtle, something deep that teaches us something important about the world, then it should be fair to describe it as a real solution to a real problem, to cite “solving naturalness” as one of the advantages of the new theory.

If you ask for my opinion? You probably shouldn’t; I’m quite far from an expert in this corner of physics, not being a phenomenologist. But if you insist on asking anyway, I suspect there probably is something wrong with the naturalness argument. That said, I expect that whatever we’re missing will be something subtle and interesting, and that naturalness is a real problem that really needs solving.

This Week, at Scientific American

I’ve written an article for Scientific American! It went up online this week, the print versions go out on the 25th. The online version is titled “Loopy Particle Math”, the print one is “The Particle Code”, but they’re the same article.

For those who don’t subscribe to Scientific American, sorry about the paywall!

“The Particle Code” covers what will be familiar material to regulars on this blog. I introduce Feynman diagrams, and talk about the “amplitudeologists” who try to find ways around them. I focus on my corner of the amplitudes field, how the work of Goncharov, Spradlin, Vergu, and Volovich introduced us to “symbology”, a set of tricks for taking apart more complicated integrals (or “periods”) into simple logarithmic building blocks. I talk about how my collaborators and I use symbology, using these building blocks to compute amplitudes that would have been impossible with other techniques. Finally, I talk about the frontier of the field, the still-mysterious “elliptic polylogarithms” that are becoming increasingly well-understood.
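To give a flavor of what the symbol does (a standard example, not one from the article, and with signs depending on convention): it reduces a transcendental function to strings of logarithmic building blocks, for instance

\mathcal{S}\big(\log x \, \log y\big) = x \otimes y + y \otimes x, \qquad \mathcal{S}\big(\mathrm{Li}_2(x)\big) = -(1-x) \otimes x

Manipulating these strings is often far easier than wrangling the original integrals, which is what makes the trick so useful.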

(I don’t talk about the even more mysterious “Calabi-Yau polylogarithms”…another time for those!)

Working with Scientific American was a fun experience. I got to see how the professionals do things. They got me to clarify and explain, pointing out terms I needed to define and places I should pause to summarize. They took my rough gel-pen drawings and turned them into polished graphics. While I’m still a little miffed about them removing all the contractions, overall I learned a lot, and I think they did a great job of bringing the article to the printed page.

Why the Coupling Constants Aren’t Constant: Epistemology and Pragmatism

If you’ve heard a bit about physics, you might have heard that each of the fundamental forces (electromagnetism, the weak nuclear force, the strong nuclear force, and gravity) has a coupling constant, a number, handed down from nature itself, that determines how strong of a force it is. Maybe you’ve seen them in a table, like this:

[Image: table of coupling constants for the fundamental forces, from HyperPhysics]

If you’ve heard a bit more about physics, though, you’ll have heard that those coupling constants aren’t actually constant! Instead, they vary with energy. Maybe you’ve seen them plotted like this:

[Image: plot of the coupling constants running with energy]

The usual way physicists explain this is in terms of quantum effects. We talk about “virtual particles”, and explain that any time particles and forces interact, these virtual particles can pop up, adding corrections that change with the energy of the interacting particles. The coupling constant includes all of these corrections, so it can’t be constant, it has to vary with energy.

[Image: a renormalized vertex, with virtual-particle corrections]

Maybe you’re happy with this explanation. But maybe you object:

“Isn’t there still a constant, though? If you ignore all the virtual particles, and drop all the corrections, isn’t there some constant number you’re correcting? Some sort of ‘bare coupling constant’ you could put into a nice table for me?”

There are two reasons I can’t do that. One is an epistemological reason, that comes from what we can and cannot know. The other is practical: even if I knew the bare coupling, most of the time I wouldn’t want to use it.

Let’s start with the epistemology:

The first thing to understand is that we can’t measure the bare coupling directly. When we measure the strength of forces, we’re always measuring the result of quantum corrections. We can’t “turn off” the virtual particles.

You could imagine measuring it indirectly, though. You’d measure the end result of all the corrections, then go back and calculate. That calculation would tell you how big the corrections were supposed to be, and you could subtract them off, solve the equation, and find the bare coupling.

And this would be a totally reasonable thing to do, except that when you go and try to calculate the quantum corrections, instead of something sensible, you get infinity.

We think that “infinity” is due to our ignorance: we know some of the quantum corrections, but not all of them, because we don’t have a final theory of nature. In order to calculate anything we need to hedge around that ignorance, with a trick called renormalization. I talk about that more in an older post. The key message to take away there is that in order to calculate anything we need to give up the hope of measuring certain bare constants, even “indirectly”. Once we fix a few constants that way, the rest of the theory gives reliable predictions.
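Very schematically (my cartoon of the bookkeeping, not a derivation), the structure looks like this:

g_{\text{measured}}(\mu) = g_{\text{bare}} + \delta g(\Lambda), \qquad \delta g(\Lambda) \to \infty \ \text{as} \ \Lambda \to \infty

Renormalization trades the unknowable pair on the right for a single measured input, the coupling at some reference energy \mu. Once that trade is made, everything else the theory predicts comes out finite.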

So we can’t measure bare constants, and we can’t reason our way to them. We have to find the full coupling, with all the quantum corrections, and use that as our coupling constant.

Still, you might wonder, why does the coupling constant have to vary? Can’t I just pick one measurement, at one energy, and call that the constant?

This is where pragmatism comes in. You could fix your constant at some arbitrary energy, sure. But you’ll regret it.

In particle physics, we usually calculate in something called perturbation theory. Instead of calculating something exactly, we have to use approximations. We add up the approximations, order by order, expecting that each time the corrections will get smaller and smaller, so we get closer and closer to the truth.
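In symbols (a generic cartoon, not tied to any particular theory), a prediction gets organized as a power series in the coupling g:

P(g) = c_0 + c_1\, g + c_2\, g^2 + c_3\, g^3 + \cdots

If g is small, each successive term should be a smaller correction than the last, so cutting the series off after a few orders gives a controlled approximation.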

And this works reasonably well if your coupling constant is small enough, provided it’s at the right energy.

If your coupling constant is at the wrong energy, then your quantum corrections will notice the difference. They won’t just be small numbers anymore. Instead, they end up containing logarithms of the ratio of energies. The bigger the difference between your arbitrary energy scale and the correct one, the bigger these logarithms get.

This doesn’t make your calculation wrong, exactly. It makes your error estimate wrong. It means that your assumption that the next order is “small enough” isn’t actually true. You’d need to go to higher and higher orders to get a “good enough” answer, if you can get there at all.
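Here’s a minimal numerical sketch of that trade-off, assuming one-loop QED running with only the electron contributing (a cartoon: real-world running gets contributions from every charged particle):

```python
import math

ALPHA_0 = 1 / 137.035999  # fine-structure constant measured at low energy
M_E = 0.000511            # electron mass in GeV, used as the reference scale

def alpha_running(mu):
    """One-loop (leading-log) QED running coupling with a single electron flavor."""
    return ALPHA_0 / (1 - (2 * ALPHA_0 / (3 * math.pi)) * math.log(mu / M_E))

for mu in [0.001, 1.0, 91.19, 1000.0]:  # energies in GeV; 91.19 is roughly the Z mass
    # The logarithmic correction you'd be stuck carrying around
    # if you froze the coupling at the electron-mass scale:
    log_correction = (2 * ALPHA_0 / (3 * math.pi)) * math.log(mu / M_E)
    print(f"mu = {mu:8.3f} GeV:  1/alpha = {1 / alpha_running(mu):7.2f},  "
          f"log correction = {log_correction:.4f}")
```

The correction is tiny near the electron mass but keeps growing with the logarithm of the energy ratio, which is why freezing the coupling at one arbitrary scale quietly ruins your error estimate.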

Because of that, you don’t want to think about the coupling constants as actually constant. If we knew the final theory then maybe we’d know the true numbers, the ultimate bare coupling constants. But we still would want to use coupling constants that vary with energy for practical calculations. We’d still prefer the plot, and not just the table.

A Micrographia of Beastly Feynman Diagrams

Earlier this year, I had a paper about the weird multi-dimensional curves you get when you try to compute trickier and trickier Feynman diagrams. These curves were “Calabi-Yau”, a type of curve string theorists have studied as a way to curl up extra dimensions to preserve something called supersymmetry. At the time, string theorists asked me why Calabi-Yau curves showed up in these Feynman diagrams. Do they also have something to do with supersymmetry?

I still don’t know the general answer. I don’t know if all Feynman diagrams have Calabi-Yau curves hidden in them, or if only some do. But for a specific class of diagrams, I now know the reason. In this week’s paper, with Jacob Bourjaily, Andrew McLeod, and Matthias Wilhelm, we prove it.

We just needed to look at some more exotic beasts to figure it out.

[Image: photograph of a tardigrade]

Like this guy!

Meet the tardigrade. In biology, they’re incredibly tenacious microscopic animals, able to withstand the most extreme of temperatures and the radiation of outer space. In physics, we’re using their name for a class of Feynman diagrams.

[Image: the “tardigrade” class of Feynman diagrams]

A clear resemblance!

There is a long history of physicists using whimsical animal names for Feynman diagrams, from the penguin to the seagull (no relation). We chose to stick with microscopic organisms: in addition to the tardigrades, we have paramecia and amoebas, even a rogue coccolithophore.

The diagrams we look at have one thing in common, which is key to our proof: the number of lines on the inside of the diagram (“propagators”, which represent “virtual particles”) is related to the number of “loops” in the diagram, as well as the dimension. When these three numbers are related in the right way, it becomes relatively simple to show that any curves we find when computing the Feynman diagram have to be Calabi-Yau.

This includes the most well-known case of Calabi-Yaus showing up in Feynman diagrams, in so-called “banana” or “sunrise” graphs. It’s closely related to some of the cases examined by mathematicians, and our argument ended up pretty close to one made back in 2009 by the mathematician Francis Brown for a different class of diagrams. Oddly enough, neither argument works for the “traintrack” diagrams from our last paper. The tardigrades, paramecia, and amoebas are “more beastly” than those traintracks: their Calabi-Yau curves have more dimensions. In fact, we can show they have the most dimensions possible at each loop, provided all of our particles are massless. In some sense, tardigrades are “as beastly as you can get”.

We still don’t know whether all Feynman diagrams have Calabi-Yau curves, or just these. We’re not even sure how much it matters: it could be that the Calabi-Yau property is a red herring here, noticed because it’s interesting to string theorists but not so informative for us. We don’t understand Calabi-Yaus all that well yet ourselves, so we’ve been looking around at textbooks to try to figure out what people know. One of those textbooks, by an author whose whimsy we heartily approve of, was the inspiration for the “bestiary” in our title.

Like the classical bestiary, we hope that ours conveys a wholesome moral. There are much stranger beasts in the world of Feynman diagrams than anyone suspected.

The Amplitudes Assembly Line

In the amplitudes field, we calculate probabilities for particles to interact.

We’re trying to improve on the old-school way of doing this, a kind of standard assembly line. First, you define your theory, writing down something called a Lagrangian. Then you start drawing Feynman diagrams, starting with the simplest “tree” diagrams and moving on to more complicated “loops”. Using rules derived from your Lagrangian, you translate these Feynman diagrams into a set of integrals. Do the integrals, and finally you have your answer.

Our field is a big tent, with many different approaches. Despite that, a kind of standard picture has emerged. It’s not the best we can do, and it’s certainly not what everyone is doing. But it’s in the back of our minds, a default to compare against and improve on. It’s the amplitudes assembly line: an “industrial” process that takes raw assumptions and builds particle physics probabilities.

[Image: diagram of the amplitudes assembly line]

  1. Start with some simple assumptions about your particles (what mass do they have? what is their spin?) and your theory (minimally, it should obey special relativity). Using that, find the simplest “trees”, involving only three particles: one particle splitting into two, or two particles merging into one.
  2. With the three-particle trees, you can now build up trees with any number of particles, using a technique called BCFW (named after its inventors, Ruth Britto, Freddy Cachazo, Bo Feng, and Edward Witten). For a taste of what these trees can look like, see the example just after this list.
  3. Now that you’ve got trees with any number of particles, it’s time to get loops! As it turns out, you can stitch together your trees into loops, using a technique called generalized unitarity. To do this, you have to know what kinds of integrals are allowed to show up in your result, and a fair amount of effort in the field goes into figuring out a better “basis” of integrals.
  4. (Optional) Generalized unitarity will tell you which integrals you need to do, but those integrals may be related to each other. By understanding where these relations come from, you can reduce to a basis of fewer “master” integrals. You can also try to aim for integrals with particular special properties; quite a lot of effort goes into improving this basis as well. The end goal is to make the final step as easy as possible:
  5. Do the integrals! If you just want to get a number out, you can use numerical methods. Otherwise, there’s a wide variety of choices available. Methods that use differential equations are probably the most popular right now, but I’m a fan of other options.
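For a taste of what step 2’s trees can look like, here’s the standard textbook example from pure Yang-Mills (quoted up to coupling constants and an overall momentum-conserving factor, and not tied to any particular paper): when exactly two gluons, i and j, have negative helicity and the rest positive, the color-ordered tree amplitude collapses to the Parke-Taylor formula,

A_n^{\text{tree}}\big(1^+ \cdots i^- \cdots j^- \cdots n^+\big) = \frac{\langle i\,j\rangle^4}{\langle 1\,2\rangle \langle 2\,3\rangle \cdots \langle n\,1\rangle}

The angle brackets are spinor products built from the gluons’ momenta. The point isn’t this particular formula: it’s that an arbitrarily large pile of Feynman diagrams can compress into a single line, which is what makes stitching trees into loops a sane strategy.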

Some people work to improve one step in this process, making it as efficient as possible. Others skip one step, or all of them, replacing them with deeper ideas. Either way, the amplitudes assembly line is the background: our current industrial machine, churning out predictions.

Don’t Marry Your Arbitrary

This fall, I’m TAing a course on General Relativity. I haven’t taught in a while, so it’s been a good opportunity to reconnect with how students think.

This week, one problem left several students confused. The problem involved Christoffel symbols, the bane of many a physics grad student, but the trick that they had to use was in the end quite simple. It’s an example of a broader trick, a way of thinking about problems that comes up all across physics.

To see a simplified version of the problem, imagine you start with this sum:

g(j)=\sum_{i=0}^n ( f(i,j)-f(j,i) )

Now, imagine you want to sum the function g(j) over j. You can write:

\sum_{j=0}^n g(j) = \sum_{j=0}^n \sum_{i=0}^n ( f(i,j)-f(j,i) )

Let’s break this up into two terms, for later convenience:

\sum_{j=0}^n g(j) = \sum_{j=0}^n \sum_{i=0}^n f(i,j) - \sum_{j=0}^n \sum_{i=0}^n f(j,i)

Without telling you anything about f(i,j), what do you know about this sum?

Well, one thing you know is that i and j are arbitrary.

i and j are letters you happened to use. You could have used different letters, x and y, or \alpha and \beta. You could even use different letters in each term, if you wanted to. You could even just pick one term, and swap i and j.

\sum_{j=0}^n g(j) = \sum_{j=0}^n \sum_{i=0}^n f(i,j) - \sum_{i=0}^n \sum_{j=0}^n f(i,j) = 0

And now, without knowing anything about f(i,j), you know that \sum_{j=0}^n g(j) is zero.
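If you’d rather convince yourself numerically, here’s a throwaway check (with an f(i,j) I made up on the spot; any choice works):

```python
def f(i, j):
    # An arbitrary made-up function of two indices.
    return (i + 1) ** 2 * (j + 3) + 0.5 * i * j

n = 7
total = sum(f(i, j) - f(j, i) for j in range(n + 1) for i in range(n + 1))
print(total)  # prints 0.0: relabeling the dummy indices makes the two pieces cancel
```

Swap in any other f and the total still vanishes; the cancellation comes entirely from relabeling the summation indices.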

In physics, it’s extremely important to keep track of what could be really physical, and what is merely your arbitrary choice. In general relativity, your choice of polar versus spherical coordinates shouldn’t affect your calculation. In quantum field theory, your choice of gauge shouldn’t matter, and neither should your scheme for regularizing divergences.

Ideally, you’d do your calculation without making any of those arbitrary choices: no coordinates, no choice of gauge, no regularization scheme. In practice, sometimes you can do this, sometimes you can’t. When you can’t, you need to keep that arbitrariness in the back of your mind, and not get stuck assuming your choice was the only one. If you’re careful with arbitrariness, it can be one of the most powerful tools in physics. If you’re not, you can stare at a mess of Christoffel symbols for hours, and nobody wants that.