
Congratulations to Syukuro Manabe, Klaus Hasselmann, and Giorgio Parisi!

The 2021 Nobel Prize in Physics was announced this week, awarded to Syukuro Manabe and Klaus Hasselmann for climate modeling and Giorgio Parisi for understanding a variety of complex physical systems.

Before this year’s prize was announced, I remember a few “water cooler chats” about who might win. No guess came close, though. The Nobel committee seems to have settled into a strategy of prizes on a loosely linked “basket” of topics, with half the prize going to a prominent theorist and the other half going to two experimental, observational, or (in this case) computational physicists. It’s still unclear why they’re doing this, but regardless, it makes it hard to predict what they’ll do next!

When I read the announcement, my first reaction was, “surely it’s not that Parisi?” Giorgio Parisi is known in my field for the Altarelli-Parisi equations (more properly known as the DGLAP equations, the longer acronym because, as is often the case in physics, the Soviets got there first). These equations are in some sense why the scattering amplitudes I study are ever useful at all. I calculate collisions of individual fundamental particles, like quarks and gluons, but a real particle collider like the LHC collides protons. Protons are messy, interacting combinations of quarks and gluons. When they collide you need not merely the equations describing colliding quarks and gluons, but those that describe their messy dynamics inside the proton, and in particular how those dynamics look different for experiments with different energies. The equation that describes that is the DGLAP equation.
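
Schematically, and in one common convention, the DGLAP equations describe how the parton distribution functions f_i(x, μ²), the densities of quarks and gluons carrying a fraction x of the proton’s momentum when probed at an energy scale μ, change with that scale:

$$\mu^2 \frac{\partial f_i(x,\mu^2)}{\partial \mu^2} = \frac{\alpha_s(\mu^2)}{2\pi} \sum_j \int_x^1 \frac{dz}{z}\, P_{ij}(z)\, f_j\!\left(\frac{x}{z},\mu^2\right)$$

The splitting functions P_ij(z) encode the chance of parton j producing parton i carrying a fraction z of its momentum.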

As it turns out, Parisi is known for a lot more than the DGLAP equation. He is best known for his work on “spin glasses”, models of materials where quantum spins try to line up with each other, never quite settling down. He also worked on a variety of other complex systems, including flocks of birds!

I don’t know as much about Manabe and Hasselmann’s work. I’ve only seen a few talks on the details of climate modeling. I’ve seen plenty of talks on other types of computer modeling, though, from people who model stars, galaxies, or black holes. And from those, I can appreciate what Manabe and Hasselmann did. Based on those talks, I recognize the importance of those first one-dimensional models, a single column of air, especially back in the 60’s when computer power was limited. Even more, I recognize how impressive it is for someone to stay on the forefront of that kind of field, upgrading models for forty years to stay relevant into the 2000’s, as Manabe did. Those talks also taught me about the challenge of coupling different scales: how small effects in churning fluids can add up and affect the simulation, and how hard it is to model different scales at once. To use these effects to discover which models are reliable, as Hasselmann did, is a major accomplishment.

Alice Through the Parity Glass

When you look into your mirror in the morning, the face looking back at you isn’t exactly your own. Your mirror image is flipped: left-handed if you’re right-handed, and right-handed if you’re left-handed. Your body is not symmetric in the mirror: we say it does not respect parity symmetry. Zoom in, and many of the molecules in your body also have a “handedness” to them: biology is not the same when flipped in a mirror.

What about physics? At first, you might expect the laws of physics themselves to respect parity symmetry. Newton’s laws are the same when reflected in a mirror, and so are Maxwell’s. But one part of physics breaks this rule: the weak nuclear force, the force that causes nuclear beta decay. The weak nuclear force interacts differently with “right-handed” and “left-handed” particles (shorthand for particles that spin counterclockwise or clockwise with respect to their motion). This came as a surprise to most physicists, but it was predicted in 1956 by Tsung-Dao Lee and Chen-Ning Yang and demonstrated soon afterward by Chien-Shiung Wu, known in her day as the “Queen of Nuclear Research”. The world really does look different when flipped in a mirror.
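
To put “flipped in a mirror” a bit more precisely: a parity transformation reverses spatial directions, which flips a particle’s momentum but not its spin (spin behaves like an axial vector), and therefore flips its handedness:

$$P:\quad \vec{x} \to -\vec{x}, \qquad \vec{p} \to -\vec{p}, \qquad \vec{S} \to \vec{S}, \qquad h \equiv \vec{S}\cdot\hat{p} \;\to\; -h$$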

I gave a lecture on the weak force for the pedagogy course I took a few weeks back. One piece of feedback I got was that the topic wasn’t very relatable. People wanted to know why they should care about the handedness of the weak force; they wanted to hear about “real-life” applications. Once scientists learned that the weak force didn’t respect parity, what did that let us do?

Thinking about this, I realized this is actually a pretty tricky story to tell. With enough time and background, I could explain that the “handedness” of the Standard Model is a major constraint on attempts to unify physics, ruling out a lot of the simpler options. That’s hard to fit in a short lecture though, and it still isn’t especially close to “real life”.

Then I realized I don’t need to talk about “real life” to give a “real-life example”. People explaining relativity get away with science fiction scenarios, spaceships on voyages to black holes. The key isn’t to be familiar, just relatable. If I can tell a story (with people in it), then maybe I can make this work.

All I need, then, is a person who cares a lot about the world behind a mirror.

Curiouser and curiouser…

When Alice goes through the looking glass in the novel of that name, she enters a world flipped left-to-right, a world with its parity inverted. Following Alice, we have a natural opportunity to explore such a world. Others have used this to explore parity symmetry in biology: for example, a side-plot in Alan Moore’s League of Extraordinary Gentlemen sees Alice come back flipped, and starve when she can’t process mirror-reversed nutrients. I haven’t seen it explored for physics, though.

In order to make this story work, we have to get Alice to care about the weak nuclear force. The most familiar thing the weak force does is cause beta decay. And the most familiar thing that undergoes beta decay is a banana. Bananas contain radioactive potassium, which can transform to calcium by emitting an electron and an anti-electron-neutrino.
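
For concreteness, the radioactive isotope in question is potassium-40, and its beta decay can be written schematically as

$$^{40}_{19}\mathrm{K} \;\to\; ^{40}_{20}\mathrm{Ca} + e^- + \bar{\nu}_e$$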

The radioactive potassium from a banana doesn’t stay in the body very long, only a few hours at most. But if Alice was especially paranoid about radioactivity, maybe she would want to avoid eating bananas. (We shouldn’t tell her that other foods contain potassium too.) If so, she might view the looking glass as a golden opportunity, a chance to eat as many bananas as she likes without worrying about radiation.

Does this work?

A first problem: can Alice even eat mirror-reversed bananas? I told you many biological molecules have handedness, which led Alan Moore’s version of Alice to starve. If we assume, unlike Moore, that Alice comes back in her original configuration and survives, we should still ask if she gets any benefit out of the bananas in the looking glass.

Researching this, I found that the main thing that makes bananas taste “banana-ish”, isoamyl acetate, does not have handedness: mirror bananas will still taste like bananas. Fructose, a sugar in bananas, does have handedness, however: it isn’t the same when flipped in a mirror. Chatting with a chemist, the impression I got was that this isn’t a total loss: often, flipping a sugar results in another, different sugar. A mirror banana might still taste sweet, but less so. Overall, it may still be worth eating.

The next problem is a tougher one: flipping a potassium atom doesn’t actually make it immune to the weak force. The weak force only interacts with left-handed particles and right-handed antiparticles: in beta decay, it transforms a left-handed down quark to a left-handed up quark, producing a left-handed electron and a right-handed anti-neutrino.
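
In the language of the Standard Model, this one-sidedness shows up as a projector: roughly speaking, the W boson couples to fermions only through their left-handed parts,

$$\psi_L = \frac{1-\gamma^5}{2}\,\psi, \qquad J^\mu_W \sim \bar{\psi}\,\gamma^\mu\,\frac{1-\gamma^5}{2}\,\psi,$$

the “V minus A” structure that encodes parity violation.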

Alice would have been fine if all of the quarks in potassium were left-handed, but they aren’t: an equal number are right-handed, so the mirror weak force will still act on them, and they will still undergo beta decay. Actually, it’s worse than that: quarks, and massive particles in general, don’t actually have a definite handedness. If you speed up enough to catch up to a quark and pass it, then from your perspective it’s now going in the opposite direction, and its handedness is flipped. The only particles with definite handedness are massless particles: those go at the speed of light, so you can never catch up to them. Another way to think about this is that quarks get their mass from the Higgs field, and this happens because the Higgs lets left- and right-handed quarks interact. What we call the quark’s mass is in some sense just left- and right-handed quarks constantly mixing back and forth.
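
That mixing can be written down directly: a quark’s mass term ties the two handedness states together,

$$m\,\bar{q}\,q = m\left(\bar{q}_L\, q_R + \bar{q}_R\, q_L\right),$$

with the mass m proportional to the Higgs field’s value times a Yukawa coupling. A massive quark is never purely left- or right-handed.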

Alice does have the opportunity to do something interesting here, if she can somehow capture the anti-neutrinos from those bananas. Our world appears to only have left-handed neutrinos and right-handed anti-neutrinos. This seemed reasonable when we thought neutrinos were massless, but now we know neutrinos have a (very small) mass. As a result, the hunt is on for right-handed neutrinos or left-handed anti-neutrinos: if we can measure them, we could fix one of the lingering mysteries of the Standard Model. With this in mind, Alice has the potential to really confuse some particle physicists, giving them some left-handed anti-neutrinos from beyond the looking-glass.

It turns out there’s a problem with even this scheme, though. The problem is a much wider one: the whole story is physically inconsistent.

I’d been acting like Alice can pass back and forth through the mirror, carrying all her particles with her. But what are “her particles”? If she carries a banana through the mirror, you might imagine the quarks in the potassium atoms carry over. But those quarks are constantly exchanging other quarks and gluons, as part of the strong force holding them together. They’re also exchanging photons with electrons via the electromagnetic force, and they’re also exchanging W bosons via beta decay. In quantum field theory, all of this is in some sense happening at once, an infinite sum over all possible exchanges. It doesn’t make sense to just carve out one set of particles and plug them in to different fields somewhere else.

If we actually wanted to describe a mirror like Alice’s looking glass in physics, we’d want to do it consistently. This is similar to how physicists think of time travel: you can’t go back in time and murder your grandparents because your whole path in space-time has to stay consistent. You can only go back and do things you “already did”. We treat space in a similar way to time. A mirror like Alice’s imposes a condition, that fields on one side are equal to their mirror image on the other side. Conditions like these get used in string theory on occasion, and they have broad implications for physics on the whole of space-time, not just near the boundary. The upshot is that a world with a mirror like Alice’s in it would be totally different from a world without the looking glass: the weak force as we know it would not exist.

So unfortunately, I still don’t have a good “real life” story for a class about parity symmetry. It’s fun trying to follow Alice through a parity transformation, but there are a few too many problems for the tale to make any real sense. Feel free to suggest improvements!

Electromagnetism Is the Weirdest Force

For a long time, physicists only knew about two fundamental forces: electromagnetism, and gravity. Physics students follow the same path, studying Newtonian gravity, then E&M, and only later learning about the other fundamental forces. If you’ve just recently heard about the weak nuclear force and the strong nuclear force, it can be tempting to think of them as just slight tweaks on electromagnetism. But while that can be a helpful way to start, in a way it’s precisely backwards. Electromagnetism is simpler than the other forces, that’s true. But because of that simplicity, it’s actually pretty weird as a force.

The weirdness of electromagnetism boils down to one key reason: the electromagnetic field has no charge.

Maybe that sounds weird to you: if you’ve done anything with electromagnetism, you’ve certainly seen charges. But while you’ve calculated the field produced by a charge, the field itself has no charge. You can specify the positions of some electrons and not have to worry that the electric field will introduce new charges you didn’t plan. Mathematically, this means your equations are linear in the field, and thus not all that hard to solve.

The other forces are different. The strong nuclear force has three types of charge, dubbed red, green, and blue. Not just quarks, but the field itself has charges under this system, making the equations that describe it non-linear.
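
To see the difference in equations: the electromagnetic field strength is linear in the photon field, while the gluon field strength picks up an extra, non-linear term from the field’s own color charge (schematically, with g the coupling and f^abc the structure constants of the color group):

$$F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu, \qquad F^a_{\mu\nu} = \partial_\mu A^a_\nu - \partial_\nu A^a_\mu + g\, f^{abc}\, A^b_\mu A^c_\nu$$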

A depiction of a singlet state

Those properties mean that you can’t just think of the strong force as a push or pull between charges, like you could with electromagnetism. The strong force doesn’t just move quarks around, it can change their color, exchanging charge between the quark and the field. That’s one reason why when we’re more careful we refer to it as not the strong force, but the strong interaction.

The weak force also makes more sense when thought of as an interaction. It can change even more properties of particles, turning different flavors of quarks and leptons into each other, resulting in, among other phenomena, nuclear beta decay. It would be even more like the strong force, but the Higgs field screws that up, stirring together two more-fundamental forces and spitting out the weak force and electromagnetism. The result ties them together in weird ways: for example, it means that the weak field can actually have an electric charge.

Interactions like the strong and weak forces are much more “normal” for particle physicists: if you ask us to picture a random fundamental force, chances are it will look like them. It won’t typically look like electromagnetism, the weird “degenerate” case with a field that doesn’t even have a charge. So despite how familiar electromagnetism may be to you, don’t take it as your model of what a fundamental force should look like: of all the forces, it’s the simplest and weirdest.

Theoretical Uncertainty and Uncertain Theory

Yesterday, Fermilab’s Muon g-2 experiment announced a new measurement of the magnetic moment of the muon, a number which describes how muons interact with magnetic fields. For what might seem like a small technical detail, physicists have been very excited about this measurement because it’s a small technical detail that the Standard Model seems to get wrong, making it a potential hint of new undiscovered particles. Quanta magazine has a great piece on the announcement, which explains more than I will here, but the upshot is that there are two different calculations on the market that attempt to predict the magnetic moment of the muon. One of them, using older methods, disagrees with the experiment. The other, with a new approach, agrees. The question then becomes, which calculation was wrong? And why?

What does it mean for a prediction to match an experimental result? The simple, wrong, answer is that the numbers must be equal: if you predict “3”, the experiment has to measure “3”. The reason why this is wrong is that in practice, every experiment and every prediction has some uncertainty. If you’ve taken a college physics class, you’ve run into this kind of uncertainty in one of its simplest forms, measurement uncertainty. Measure with a ruler, and you can only confidently measure down to the smallest divisions on the ruler. If you measure 3cm, but your ruler has ticks only down to a millimeter, then what you’re measuring might be as large as 3.1cm or as small as 2.9 cm. You just don’t know.

This uncertainty doesn’t mean you throw up your hands and give up. Instead, you estimate the effect it can have. You report, not a measurement of 3cm, but of 3cm plus or minus 1mm. If the prediction was 2.9cm, then you’re fine: it falls within your measurement uncertainty.

Measurements aren’t the only thing that can be uncertain. Predictions have uncertainty too, theoretical uncertainty. Sometimes, this comes from uncertainty on a previous measurement: if you make a prediction based on that experiment that measured 3cm plus or minus 1mm, you have to take that plus or minus into account and estimate its effect (we call this propagation of errors). Sometimes, the uncertainty comes instead from an approximation you’re making. In particle physics, we sometimes approximate interactions between different particles with diagrams, beginning with the simplest diagrams and adding on more complicated ones as we go. To estimate the uncertainty there, we estimate the size of the diagrams we left out, the more complicated ones we haven’t calculated yet. Other times, that approximation doesn’t work, and we need to use a different approximation, treating space and time as a finite grid where we can do computer simulations. In that case, you can estimate your uncertainty based on how small you made your grid. The new approach to predicting the muon magnetic moment uses that kind of approximation.
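
To make “propagation of errors” concrete, here is a minimal sketch in Python, assuming independent Gaussian uncertainties that add in quadrature (the numbers are invented for illustration, not taken from any of the calculations above):

```python
import math

# Minimal sketch: propagate independent uncertainties in quadrature.
# The numbers below are made up for illustration.
measurement, meas_err = 3.0, 0.1    # e.g. a length of 3.0 cm read off a ruler with 1 mm ticks
prediction, pred_err = 2.9, 0.05    # a prediction carrying its own theoretical uncertainty

# Uncertainty on the difference, assuming the two errors are independent
combined_err = math.sqrt(meas_err**2 + pred_err**2)
difference = abs(measurement - prediction)

print(f"difference = {difference:.3f} cm, combined uncertainty = {combined_err:.3f} cm")
print("consistent within 1 sigma" if difference <= combined_err else "tension beyond 1 sigma")
```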

There’s a common thread in all of these uncertainty estimates: you don’t expect to be too far off on average. Your measurements won’t be perfect, but they won’t all be screwed up in the same way either: chances are, they will randomly be a little below or a little above the truth. Your calculations are similar: whether you’re ignoring complicated particle physics diagrams or the spacing in a simulated grid, you can treat the difference as something small and random. That randomness means you can use statistics to talk about your errors: you have statistical uncertainty. When you have statistical uncertainty, you can estimate, not just how far off you might get, but how likely it is you ended up that far off. In particle physics, we have very strict standards for this kind of thing: to call something new a discovery, we demand that it is so unlikely that it would only show up randomly under the old theory roughly one in a million times. The muon magnetic moment isn’t quite up to our standards for a discovery yet, but the new measurement brought it closer.
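
For the curious, here is a rough sketch of how a significance quoted in “sigmas” translates into the chance of a random fluke, under the idealized assumption that the uncertainties really are Gaussian:

```python
import math

def gaussian_tail_probability(n_sigma: float) -> float:
    """One-sided probability of a fluctuation at least n_sigma above the mean,
    assuming Gaussian-distributed uncertainties."""
    return 0.5 * math.erfc(n_sigma / math.sqrt(2))

# The conventional particle-physics "discovery" threshold is five sigma.
for sigma in (3, 4, 5):
    print(f"{sigma} sigma -> chance of a random fluke ~ {gaussian_tail_probability(sigma):.1e}")
```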

The two dueling predictions for the muon’s magnetic moment both estimate some amount of statistical uncertainty. It’s possible that the two calculations just disagree due to chance, and that better measurements or a tighter simulation grid would make them agree. Given their estimates, though, that’s unlikely. That takes us from the realm of theoretical uncertainty, and into uncertainty about the theoretical. The two calculations use very different approaches. The new calculation tries to compute things from first principles, using the Standard Model directly. The risk is that such a calculation needs to make assumptions, ignoring some effects that are too difficult to calculate, and one of those assumptions may be wrong. The older calculation is based more on experimental results, using different experiments to estimate effects that are hard to calculate but that should be similar between different situations. The risk is that the situations may be less similar than expected, their assumptions breaking down in a way that the bottom-up calculation could catch.

None of these risks are easy to estimate. They’re “unknown unknowns”, or rather, “uncertain uncertainties”. And until some of them are resolved, it won’t be clear whether Fermilab’s new measurement is a sign of undiscovered particles, or just a (challenging!) confirmation of the Standard Model.

QCD Meets Gravity 2020, Retrospective

I was at a Zoomference last week, called QCD Meets Gravity, about the many ways gravity can be thought of as the “square” of other fundamental forces. I didn’t have time to write much about the actual content of the conference, so I figured I’d say a bit more this week.

A big theme of this conference, as in the past few years, was gravitational waves. Ever since LIGO’s first announcement of a successful detection, amplitudeologists have been developing new methods to make predictions for gravitational waves more efficient. It’s a field I’ve dabbled in a bit myself. Last year’s QCD Meets Gravity left me impressed by how much progress had been made, with amplitudeologists already solidly part of the conversation and able to produce competitive results. This year felt like another milestone, in that the amplitudeologists weren’t just catching up with other gravitational wave researchers on the same kinds of problems. Instead, they found new questions that amplitudes are especially well-suited to answer. These included using an old quantum field theory trick to combine two pieces of these calculations (“potential” and “radiation”) that the older community typically has to calculate separately; finding the gravitational wave directly from amplitudes; and finding a few nice calculations that can be used to “generate” the rest.

A large chunk of the talks focused on different “squaring” tricks (or as we actually call them, double-copies). There were double-copies for cosmology and conformal field theory, for the celestial sphere, and even some version of M theory. There were new perspectives on the double-copy, new building blocks and algebraic structures that lie behind it. There were talks on the so-called classical double-copy for space-times, where there have been some strange discoveries (an extra dimension made an appearance) but also a more rigorous picture of where the whole thing comes from, using twistor space. There were not one, but two talks linking the double-copy to the Navier-Stokes equation describing fluids, from two different groups. (I’m really curious whether these perspectives are actually useful for practical calculations about fluids, or just fun to think about.) Finally, while there wasn’t a talk scheduled on this paper, the authors were roped in by popular demand to talk about their work. They claim to have made progress on a longstanding puzzle, how to show that double-copy works at the level of the Lagrangian, and the community was eager to dig into the details.

From there, a grab-bag of talks covered other advancements. There were talks from string theorists and ambitwistor string theorists, from Effective Field Theorists working on gravity and the Standard Model, from calculations in N=4 super Yang-Mills, QCD, and scalar theories. Simon Caron-Huot delved into how causality constrains the theories we can write down, showing an interesting case where the common assumption that all parameters are close to one is actually justified. Nima Arkani-Hamed began his talk by saying he’d surprise us, which he certainly did (and not by keeping on time). It’s tricky to explain why his talk was exciting. Comparing to his earlier discovery of the Amplituhedron, which worked for a toy model, this is a toy calculation in a toy model. While the Amplituhedron wasn’t based on Feynman diagrams, this can’t even be compared with Feynman diagrams. Instead of expanding in a small coupling constant, this expands in a parameter that by all rights should be equal to one. And instead of positivity conditions, there are negativity conditions. All I can say is that with all of that in mind, it looks like real progress on an important and difficult problem from a totally unanticipated direction. In a speech summing up the conference, Zvi Bern mentioned a few exciting words from Nima’s talk: “nonplanar”, “integrated”, “nonperturbative”. I’d add “differential equations” and “infinite sums of ladder diagrams”. Nima and collaborators are trying to figure out what happens when you sum up all of the Feynman diagrams in a theory. I’ve made progress in the past for diagrams with one “direction”, a ladder that grows as you add more loops, but I didn’t know how to add “another direction” to the ladder. In very rough terms, Nima and collaborators figured out how to add that direction.

I’ve probably left things out here, it was a packed conference! It’s been really fun seeing what the community has cooked up, and I can’t wait to see what happens next.

QCD Meets Gravity 2020

I’m at another Zoom conference this week, QCD Meets Gravity. This year it’s hosted by Northwestern.

The view of the campus from wonder.me

QCD Meets Gravity is a conference series focused on the often-surprising links between quantum chromodynamics on the one hand and gravity on the other. By thinking of gravity as the “square” of forces like the strong nuclear force, researchers have unlocked new calculation techniques and deep insights.

Last year’s conference was very focused on one particular topic, trying to predict the gravitational waves observed by LIGO and Virgo. That’s still a core topic of the conference, but it feels like there is a bit more diversity in topics this year. We’ve seen a variety of talks on different “squares”: new theories that square to other theories, and new calculations that benefit from “squaring” (even surprising applications to the Navier-Stokes equation!). There are talks on subjects from String Theory to Effective Field Theory, and even a talk on a very different way that “QCD meets gravity”, in collisions of neutron stars.

With still a few more talks to go, expect me to say a bit more next week, probably discussing a few in more detail. (Several people presented exciting work in progress!) Until then, I should get back to watching!

QCD Meets Gravity 2019

I’m at UCLA this week for QCD Meets Gravity, a conference about the surprising ways that gravity is “QCD squared”.

When I attended this conference two years ago, the community was branching out into a new direction: using tools from particle physics to understand the gravitational waves observed at LIGO.

At this year’s conference, gravitational waves have grown from a promising new direction to a large fraction of the talks. While there were still the usual talks about quantum field theory and string theory (everything from bootstrap methods to a surprising application of double field theory), gravitational waves have clearly become a major focus of this community.

This was highlighted before the first talk, when Zvi Bern brought up a recent paper by Thibault Damour. Bern and collaborators had recently used particle physics methods to push beyond the state of the art in gravitational wave calculations. Damour, an expert in the older methods, claims that Bern et al’s result is wrong, and in doing so also questions an earlier result by Amati, Ciafaloni, and Veneziano. More than that, Damour argued that the whole approach of using these kinds of particle physics tools for gravitational waves is misguided.

There was a lot of good-natured ribbing of Damour in the rest of the conference, as well as some serious attempts to confront his points. Damour’s argument so far is somewhat indirect, so there is hope that a more direct calculation (which Damour is currently pursuing) will resolve the matter. In the meantime, Julio Parra-Martinez described a reproduction of the older Amati/Ciafaloni/Veneziano result with more Damour-approved techniques, as well as additional indirect arguments that Bern et al got things right.

Before the QCD Meets Gravity community worked on gravitational waves, other groups had already built a strong track record in the area. One encouraging thing about this conference was how much the two communities are talking to each other. Several speakers came from the older community, and there were a lot of references in both groups’ talks to the other group’s work. This, more than even the content of the talks, felt like the strongest sign that something productive is happening here.

Many talks began by trying to motivate these gravitational calculations, usually to address the mysteries of astrophysics. Two talks were more direct, with Ramy Brustein and Pierre Vanhove speculating about new fundamental physics that could be uncovered by these calculations. I’m not the kind of physicist who does this kind of speculation, and I confess both talks struck me as rather strange. Vanhove in particular explicitly rejects the popular criterion of “naturalness”, making me wonder if his work is the kind of thing critics of naturalness have in mind.

QCD and Reductionism: Stranger Than You’d Think

Earlier this year, I made a list of topics I wanted to understand. The most abstract and technical of them was something called “Wilsonian effective field theory”. I still don’t understand Wilsonian effective field theory. But while thinking about it, I noticed something that seemed weird. It’s something I think many physicists already understand, but that hasn’t really sunk in with the public yet.

There’s an old problem in particle physics, described in many different ways over the years. Take our theories and try to calculate some reasonable number (say, the angle an electron turns in a magnetic field), and instead of that reasonable number we get infinity. We fix this problem with a process called renormalization that hides that infinity away, changing the “normalization” of some constant like a mass or a charge. While renormalization first seemed like a shady trick, physicists eventually understood it better. First, we thought of it as a way to work around our ignorance, that the true final theory would have no infinities at all. Later, physicists instead thought about renormalization in terms of scaling.

Imagine looking at the world on a camera screen. You can zoom in, or zoom out. The further you zoom out, the more details you’ll miss: they’re just too small to be visible on your screen. You can guess what they might be, but your picture will be different depending on how you zoom.

In particle physics, many of our theories are like that camera. They come with a choice of “zoom setting”, a minimum scale where they still effectively tell the whole story. We call theories like these effective field theories. Some physicists argue that these are all we can ever have: since our experiments are never perfect, there will always be a scale so small we have no evidence about it.

In general, theories can be quite different at different scales. Some theories, though, are especially nice: they look almost the same as we zoom in to smaller scales. The only things that change are the mass of different particles, and their charges.

Trippy

One theory like this is Quantum Chromodynamics (or QCD), the theory of quarks and gluons. Zoom in, and the theory looks pretty much the same, with one crucial change: the force between particles gets weaker. There’s a number, called the “coupling constant”, that describes how strong a force is; think of it as sort of like an electric charge. As you zoom in to quarks and gluons, you find you can still describe them with QCD, just with a smaller coupling constant. If you could zoom “all the way in”, the constant (and thus the force between particles) would be zero.
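
Quantitatively, at lowest order the QCD coupling “runs” with the zoom scale μ as

$$\alpha_s(\mu^2) = \frac{\alpha_s(\mu_0^2)}{1 + \frac{b_0}{4\pi}\,\alpha_s(\mu_0^2)\,\ln\frac{\mu^2}{\mu_0^2}}, \qquad b_0 = 11 - \tfrac{2}{3}\,n_f,$$

where n_f counts the quark flavors. Because b_0 is positive, zooming in (larger μ) makes α_s smaller and smaller, the behavior known as asymptotic freedom.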

This makes QCD a rare kind of theory: one that could be complete to any scale. No matter how far you zoom in, QCD still “makes sense”. It never gives contradictions or nonsense results. That doesn’t mean it’s actually true: it interacts with other forces, like gravity, that don’t have complete theories, so it probably isn’t complete either. But if we didn’t have gravity or electricity or magnetism, if all we had were quarks and gluons, then QCD could have been the final theory that described them.

And this starts feeling a little weird, when you think about reductionism.

Philosophers define reductionism in many different ways. I won’t be that sophisticated. Instead, I’ll suggest the following naive definition: Reductionism is the claim that theories on larger scales reduce to theories on smaller scales.

Here “reduce to” is intentionally a bit vague. It might mean “are caused by” or “can be derived from” or “are explained by”. I’m gesturing at the sort of thing people mean when they say that biology reduces to chemistry, or chemistry to physics.

What happens when we think about QCD, with this intuition?

QCD on larger scales does indeed reduce to QCD on smaller scales. If you want to ask why QCD on some scale has some coupling constant, you can explain it by looking at the (smaller) QCD coupling constant on a smaller scale. If you have equations for QCD on a smaller scale, you can derive the right equations for a larger scale. In some sense, everything you observe in your larger-scale theory of QCD is caused by what happens in your smaller-scale theory of QCD.

But this isn’t quite the reductionism you’re used to. When we say biology reduces to chemistry, or chemistry reduces to physics, we’re thinking of just a few layers: one specific theory reduces to another specific theory. Here, we have an infinite number of layers, every point on the scale from large to small, each one explained by the next.

Maybe you think you can get out of this, by saying that everything should reduce to the smallest scale. But remember, the smaller the scale the smaller our “coupling constant”, and the weaker the forces between particles. At “the smallest scale”, the coupling constant is zero, and there is no force. It’s only when you put your hand on the zoom knob and start turning that the force starts to exist.

It’s reductionism, perhaps, but not as we know it.

Now that I understand this a bit better, I get some of the objections to my post about naturalness a while back. I was being too naive about this kind of thing, as some of the commenters (particularly Jacques Distler) noted. I believe there’s a way to rephrase the argument so that it still works, but I’d have to think harder about how.

I also get why I was uneasy about Sabine Hossenfelder’s FQXi essay on reductionism. She considered a more complicated case, where the chain from large to small scale could be broken, a more elaborate variant of a problem in Quantum Electrodynamics. But if I’m right here, then it’s not clear that scaling in effective field theories is even the right way to think about this. When you have an infinite series of theories that reduce to other theories, you’re pretty far removed from what most people mean by reductionism.

Finally, this is the clearest reason I can find why you can’t do science without an observer. The “zoom” is just a choice we scientists make, an arbitrary scale describing our ignorance. But without it, there’s no way to describe QCD. The notion of scale is an inherent and inextricable part of the theory, and it doesn’t have to mean our theory is incomplete.

Experts, please chime in if I’m wrong on the physics here. As I mentioned at the beginning, I still don’t think I understand Wilsonian effective field theory. If I’m right though, this seems genuinely weird, and something more of the public should appreciate.

Amplitudes 2019 Retrospective

I’m back from Amplitudes 2019, and since I have more time I figured I’d write down a few more impressions.

Amplitudes runs all the way from practical LHC calculations to almost pure mathematics, and this conference had plenty of both as well as everything in between. On the more practical side a standard “pipeline” has developed: get a large number of integrals from generalized unitarity, reduce them to a more manageable number with integration-by-parts, and then compute them with differential equations. Vladimir Smirnov and Johannes Henn presented the state of the art in this pipeline: challenging QCD calculations that required powerful methods. Others aimed to replace various parts of the pipeline. Integration-by-parts could be avoided in the numerical unitarity approach discussed by Ben Page, or alternatively with the intersection theory techniques showcased by Pierpaolo Mastrolia. More radical departures included Stefan Weinzierl’s refinement of loop-tree duality, and Jacob Bourjaily’s advocacy of prescriptive unitarity. Robert Schabinger even brought up direct integration, though I mostly viewed his talk as an independent confirmation of the usefulness of Erik Panzer’s thesis. It also showcased an interesting integral that had previously been represented by Lorenzo Tancredi and collaborators as elliptic, but turned out to be writable in terms of more familiar functions. It’s going to be interesting to see whether other such integrals arise, and whether they can be spotted in advance.

On the other end of the scale, Francis Brown was the only speaker deep enough in the culture of mathematics to insist on doing a blackboard talk. Since the conference hall didn’t actually have a blackboard, this was accomplished by projecting video of a piece of paper that he wrote on as the talk progressed. Despite the awkward setup, the talk was impressively clear, though there were enough questions that he ran out of time at the end and had to “cheat” by just projecting his notes instead. He presented a few theorems about the sort of integrals that show up in string theory. Federico Zerbini and Eduardo Casali’s talks covered similar topics, with the latter also involving intersection theory. Intersection theory also appeared in a poster from grad student Andrzej Pokraka, which overall is a pretty impressively broad showing for a part of mathematics that Sebastian Mizera first introduced to the amplitudes community less than two years ago.

Nima Arkani-Hamed’s talk on Wednesday fell somewhere in between. A series of airline mishaps brought him there only a few hours before his talk, and his own busy schedule sent him back to the airport right after the last question. The talk itself covered several topics, tied together a bit better than usual by a nice account in the beginning of what might motivate a “polytope picture” of quantum field theory. One particularly interesting aspect was a suggestion of a space, smaller than the amplituhedron, that might more accurately describe the “alphabet” that appears in N=4 super Yang-Mills amplitudes. If his proposal works, it may be that the infinite alphabet we were worried about for eight-particle amplitudes is actually finite. Ömer Gürdoğan’s talk mentioned this, and drew out some implications. Overall, I’m still unclear as to what this story says about whether the alphabet contains square roots, but that’s a topic for another day. My talk was right after Nima’s, and while he went over-time as always I compensated by accidentally going under-time. Overall, I think folks had fun regardless.

Though I don’t know how many people recognized this guy

Amplitudes 2019

It’s that time of year again, and I’m at Amplitudes, my field’s big yearly conference. This year we’re in Dublin, hosted by Trinity.

Which also hosts the Book of Kells, and the occasional conference reception just down the hall from the Book of Kells

Increasingly, the organizers of Amplitudes have been setting aside a few slots for talks from people in other fields. This year the “closest” such speaker was Kirill Melnikov, who pointed out some of the hurdles that make it difficult to have useful calculations to compare to the LHC. Many of these hurdles aren’t things that amplitudes-people have traditionally worked on, but are still things that might benefit from our particular expertise. Another such speaker, Maxwell Hansen, is from a field called Lattice QCD. While amplitudeologists typically compute with approximations, order by order in more and more complicated diagrams, Lattice QCD instead simulates particle physics on supercomputers, chopping up their calculations on a grid. This allows them to study much stronger forces, including the messy interactions of quarks inside protons, but they have a harder time with the situations we’re best at, where two particles collide from far away. Apparently, though, they are making progress on that kind of calculation, with some clever tricks to connect it to calculations they know how to do. While I was a bit worried that this would let them fire all the amplitudeologists and replace us with supercomputers, they’re not quite there yet; nonetheless, they are doing better than I would have expected. Other speakers from other fields included Leron Borsten, who has been applying the amplitudes concept of the “double copy” to M theory, and Andrew Tolley, who uses the kind of “positivity” properties that amplitudeologists find interesting to restrict the kinds of theories used in cosmology.

The biggest set of “non-traditional-amplitudes” talks focused on using amplitudes techniques to calculate the behavior not of particles but of black holes, to predict the gravitational wave patterns detected by LIGO. This year featured a record six talks on the topic, a sixth of the conference. Last year I commented that the research ideas from amplitudeologists on gravitational waves had gotten more robust, with clearer proposals for how to move forward. This year things have developed even further, with several initial results. Even more encouragingly, while there are several groups doing different things they appear to be genuinely listening to each other: there were plenty of references in the talks both to other amplitudes groups and to work by more traditional gravitational physicists. There’s definitely still plenty of lingering confusion that needs to be cleared up, but it looks like the community is robust enough to work through it.

I’m still busy with the conference, but I’ll say more when I’m back next week. Stay tuned for square roots, clusters, and Nima’s travel schedule. And if you’re a regular reader, please fill out last week’s poll if you haven’t already!