Tag Archives: quantum field theory

The Sum of Our Efforts

I got a new paper out last week, with Andrew McLeod, Henrik Munch, and Georgios Papathanasiou.

A while back, some collaborators and I found an interesting set of Feynman diagrams that we called “Omega”. These Omega diagrams were fun because they let us avoid one of the biggest limitations of particle physics: that we usually have to compute approximations, diagram by diagram, rather than finding an exact answer. For these Omegas, we figured out how to add up the entire infinite set of Omega diagrams, with no approximation.

One implication of this was that, in principle, we now knew the answer for each individual Omega diagram, far past what had been computed before. However, writing down these answers was easier said than done. After some wrangling, we got the answer for each diagram in terms of an infinite sum. But despite tinkering with them for a while, even our resident infinite-sum expert Georgios Papathanasiou couldn’t quite sum them up.

Naturally, this made me think the sums would make a great Master’s project.

When Henrik Munch showed up looking for a project, Andrew McLeod and I gave him several options, but he settled on the infinite sums. Impressively, he ended up solving the problem in two different ways!

First, he found an old paper, one none of us had seen before, that gave a general method for solving that kind of infinite sum. When he realized that method was really annoying to program, he took the principle behind it, called telescoping, and came up with his own, simpler method for our particular case.

Picture an old-timey folding telescope. It might be long when fully extended, but when you fold it up each piece fits inside the previous one, resulting in a much smaller object. Telescoping a sum has the same spirit. If each pair of terms in a sum “fit together” (if their difference is simple), you can rearrange them so that most of the difficulty “cancels out” and you’re left with a much simpler sum.
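For a toy example (my own illustration here, much simpler than anything in the paper), take the classic sum of 1/(n(n+1)). Each term can be written as a difference of two simpler pieces:

1/(1·2) + 1/(2·3) + … + 1/(N(N+1)) = (1 − 1/2) + (1/2 − 1/3) + … + (1/N − 1/(N+1)) = 1 − 1/(N+1).

Everything in the middle cancels, and as N goes to infinity the whole sum collapses to 1. Our sums are far messier than this one, but the same kind of cancellation is what makes them manageable.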

Henrik’s telescoping idea worked even better than expected. We found that we could do, not just the Omega sums, but other sums in particle physics as well. Infinite sums are a very well-studied field, so it was interesting to find something genuinely new.

The rest of us worked to generalize the result, to check the examples and to put it in context. But the core of the work was Henrik’s. I’m really proud of what he accomplished. If you’re looking for a PhD student, he’s on the market!

4gravitons, Spinning Up

I had a new paper out last week, with Michèle Levi and Andrew McLeod. But to explain it, I’ll need to clarify something about our last paper.

Two weeks ago, I told you that Andrew and Michèle and I had written a paper, predicting what gravitational wave telescopes like LIGO see when black holes collide. You may remember that LIGO doesn’t just see colliding black holes: it sees colliding neutron stars too. So why didn’t we predict what happens when neutron stars collide?

Actually, we did. Our calculation doesn’t just apply to black holes. It applies to neutron stars too. And not just neutron stars: it applies to anything of roughly the right size and shape. Black holes, neutron stars, very large grapefruits…

(Image caption: LIGO’s next big discovery)

That’s the magic of Effective Field Theory, the “zoom lens” of particle physics. Zoom out far enough, and any big, round object starts looking like a particle. Black holes, neutron stars, grapefruits, we can describe them all using the same math.

Ok, so we can describe both black holes and neutron stars. Can we tell the difference between them?

In our last calculation, no. In this one, yes!

Effective Field Theory isn’t just a zoom lens, it’s a controlled approximation. That means that when we “zoom out” we don’t just throw out anything “too small to see”. Instead, we approximate it, estimating how big of an effect it can have. Depending on how precise we want to be, we can include more and more of these approximated effects. If our estimates are good, we’ll include everything that matters, and get a good approximation for what we’re trying to observe.
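Schematically (this is just an illustration of the general idea, not the actual expansion from our paper, and the symbols are my own), the result of such a calculation looks like a series in a small ratio,

(observable) ≈ A₀ + A₁ (R/r) + A₂ (R/r)² + … ,

where R stands for the size of the object and r for the much larger distance you observe it from. Each extra power of R/r is a smaller correction, and choosing your precision amounts to choosing where to cut the series off.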

At the precision of our last calculation, a black hole and a neutron star still look exactly the same. Our new calculation aims for a bit higher precision though. (For the experts: we’re at a higher order in spin.) The higher precision means that we can actually see the difference: our result changes for two colliding black holes versus two colliding grapefruits.

So does that mean I can tell you what happens when two neutron stars collide, according to our calculation? Actually, no. That’s not because we screwed up the calculation: it’s because some of the properties of neutron stars are unknown.

The Effective Field Theory of neutron stars has what we call “free parameters”, unknown variables. People have tried to estimate some of these (called “Love numbers” after the mathematician A. E. H. Love), but they depend on the details of how neutron stars work: what stuff they contain, how that stuff is shaped, and how it can move. To find them out, we probably can’t just calculate: we’ll have to measure, observe an actual neutron star collision and see what the numbers actually are.

That’s one of the purposes of gravitational wave telescopes. It’s not (as far as I know) something LIGO can measure. But future telescopes, with more precision, should be able to. By watching two colliding neutron stars and comparing to a high-precision calculation, physicists will better understand what those neutron stars are made of. In order to do that, they will need someone to do that high-precision calculation. And that’s why people like me are involved.

4gravitons Exchanges a Graviton

I had a new paper up last Friday with Michèle Levi and Andrew McLeod, on a topic I hadn’t worked on before: colliding black holes.

I am an “amplitudeologist”. I work on particle physics calculations, computing “scattering amplitudes” to find the probability that fundamental particles bounce off each other. This sounds like the farthest thing possible from black holes. Nevertheless, the two are tightly linked, through the magic of something called Effective Field Theory.

Effective Field Theory is a kind of “zoom knob” for particle physics. You “zoom out” to some chosen scale, and write down a theory that describes physics at that scale. Your theory won’t be a complete description: you’re ignoring everything that’s “too small to see”. It will, however, be an effective description: one that, at the scale you’re interested in, is effectively true.

Particle physicists usually use Effective Field Theory to go between different theories of particle physics, to zoom out from strings to quarks to protons and neutrons. But you can zoom out even further, all the way out to astronomical distances. Zoom out far enough, and even something as massive as a black hole looks like just another particle.

(Image caption: Just click the “zoom X10” button fifteen times, and you’re there!)

In this picture, the force of gravity between black holes looks like particles (specifically, gravitons) going back and forth. With this picture, physicists can calculate what happens when two black holes collide with each other, making predictions that can be checked with new gravitational wave telescopes like LIGO.

Researchers have pushed this technique quite far. As the calculations get more and more precise (more and more “loops”), they have gotten more and more challenging. This is particularly true when the black holes are spinning, an extra wrinkle in the calculation that adds a surprising amount of complexity.

That’s where I came in. I can’t compete with the experts on black holes, but I certainly know a thing or two about complicated particle physics calculations. Amplitudeologists, like Andrew McLeod and me, have a grab-bag of tricks that make these kinds of calculations a lot easier. With Michèle Levi’s expertise working with spinning black holes in Effective Field Theory, we were able to combine our knowledge to push beyond the state of the art, to a new level of precision.

This project has been quite exciting for me, for a number of reasons. For one, it’s my first time working with gravitons: despite this blog’s name, I’d never published a paper on gravity before. For another, as my brother quipped when he heard about it, this is by far the most “applied” paper I’ve ever written. I mostly work with a theory called N=4 super Yang-Mills, a toy model we use to develop new techniques. This paper isn’t a toy model: the calculation we did should describe black holes out there in the sky, in the real world. There’s a decent chance someone will use this calculation to compare with actual data, from LIGO or a future telescope. That, in particular, is an absurdly exciting prospect.

Because this was such an applied calculation, it was an opportunity to explore the more applied part of my own field. We ended up using well-known techniques from that corner, but I look forward to doing something more inventive in future.

What I Was Not Saying in My Last Post

Science communication is a gradual process. Anything we say is incomplete, prone to cause misunderstanding. Luckily, we can keep talking, give a new explanation that corrects those misunderstandings. This of course will lead to new misunderstandings. We then explain again, and so on. It sounds fruitless, but in practice our audience nevertheless gets closer and closer to the truth.

Last week, I tried to explain physicists’ notion of a fundamental particle. In particular, I wanted to explain what these particles aren’t: tiny, indestructible spheres, like Democritus imagined. Instead, I emphasized the idea of fields, interacting and exchanging energy, with particles as just the tip of the field iceberg.

I’ve given this kind of explanation before. And when I do, there are two things people often misunderstand. These correspond to two topics which use very similar language, but talk about different things. So this week, I thought I’d get ahead of the game and correct those misunderstandings.

The first misunderstanding: None of that post was quantum.

If you’ve heard physicists explain quantum mechanics, you’ve probably heard about wave-particle duality. Things we thought were waves, like light, also behave like particles, and things we thought were particles, like electrons, also behave like waves.

If that’s on your mind, and you see me say particles don’t exist, maybe you think I mean waves exist instead. Maybe when I say “fields”, you think I’m talking about waves. Maybe you think I’m choosing one side of the duality, saying that waves exist and particles don’t.

To be 100% clear: I am not saying that.

Particles and waves, in quantum physics, are both manifestations of fields. Is your field concentrated at one specific point? Then it’s a particle. Is it spread out, with a fixed wavelength and frequency? Then it’s a wave. These are the two concepts connected by wave-particle duality, where the same object can behave differently depending on what you measure. And both of them, to be clear, come from fields. Neither is the kind of thing Democritus imagined.
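As a cartoon (my own rough picture, not a precise statement): a “particle-like” state of a field φ is one where φ(x) is sharply peaked around a single point and nearly zero everywhere else, while a “wave-like” state is one where the field oscillates everywhere, something like

φ(x, t) ≈ A cos(kx − ωt),

with a definite wavelength 2π/k and frequency ω/2π. Both are configurations of one and the same field.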

The second misunderstanding: This isn’t about on-shell vs. off-shell.

Some of you have seen some more “advanced” science popularization. In particular, you might have listened to Nima Arkani-Hamed, of amplituhedron fame, talk about his perspective on particle physics. Nima thinks we need to reformulate particle physics, as much as possible, “on-shell”. “On-shell” means that particles obey their equations of motion; normally, quantum calculations involve “off-shell” particles that violate those equations.

To again be clear: I’m not arguing with Nima here.

Nima (and other people in our field) will sometimes talk about on-shell vs. off-shell as if it were about particles vs. fields. Normal physicists will write down a general field and let it be off-shell; we try to do calculations with particles that are on-shell. But once again, on-shell doesn’t mean Democritus-style. We still don’t know what a fully on-shell picture of physics will look like. Chances are it won’t look like the picture of sloshing, omnipresent fields we started with, at least not exactly. But it won’t bring back indivisible, unchangeable atoms. Those are gone, and we have no reason to bring them back.

These Ain’t Democritus’s Particles

Physicists talk a lot about fundamental particles. But what do we mean by fundamental?

The Ancient Greek philosopher Democritus thought the world was composed of fundamental indivisible objects, constantly in motion. He called these objects “atoms”, and believed they could never be created or destroyed, with every other phenomenon explained by different types of interlocking atoms.

The things we call atoms today aren’t really like this, as you probably know. Atoms aren’t indivisible: their electrons can be split from their nuclei, and with more energy their nuclei can be split into protons and neutrons. More energy yet, and protons and neutrons can in turn be split into quarks. Still, at this point you might wonder: could quarks be Democritus’s atoms?

In a word, no. Nonetheless, quarks are, as far as we know, fundamental particles. As it turns out, our “fundamental” is very different from Democritus’s. Our fundamental particles can transform.

Think about beta decay. You might be used to thinking of it in terms of protons and neutrons: an unstable neutron decays, becoming a proton, an electron, and an (electron-anti-)neutrino. You might think that when the neutron decays, it literally “decays”, falling apart into smaller pieces.

But when you look at the quarks, the neutron’s smallest pieces, that isn’t the picture at all. In beta decay, a down quark in the neutron changes, turning into an up quark and an unstable W boson. The W boson then decays into an electron and a neutrino, while the up quark becomes part of the new proton. Even looking at the most fundamental particles we know, Democritus’s picture of unchanging atoms just isn’t true.
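In symbols, the two pictures look like this:

At the level of protons and neutrons: n → p + e⁻ + ν̄ₑ
At the level of quarks: d → u + W⁻, followed by W⁻ → e⁻ + ν̄ₑ

Nothing “inside” the down quark flies apart: the down quark itself changes into something else.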

Could there be some even lower level of reality that works the way Democritus imagined? It’s not impossible. But the key insight of modern particle physics is that there doesn’t need to be.

As far as we know, up quarks and down quarks are both fundamental. Neither is “made of” the other, or “made of” anything else. But they also aren’t little round indestructible balls. They’re manifestations of quantum fields, “ripples” that slosh from one sort to another in complicated ways.

When we ask which particles are fundamental, we’re asking what quantum fields we need to describe reality. We’re asking for the simplest explanation, the simplest mathematical model, that’s consistent with everything we could observe. So “fundamental” doesn’t end up meaning indivisible, or unchanging. It’s fundamental like an axiom: used to derive the rest.

You Can’t Anticipate a Breakthrough

As a scientist, you’re surrounded by puzzles. For every test and every answer, ten new questions pop up. You can spend a lifetime on question after question, never getting bored.

But which questions matter? If you want to change the world, if you want to discover something deep, which questions should you focus on? And which should you ignore?

Last year, my collaborators and I completed a long, complicated project. We were calculating the chance fundamental particles bounce off each other in a toy model of nuclear forces, pushing to very high levels of precision. We managed to figure out a lot, but as always, there were many questions left unanswered in the end.

The deepest of these questions came from number theory. We had noticed surprising patterns in the numbers that showed up in our calculation, reminiscent of the fancifully-named Cosmic Galois Theory. Certain kinds of numbers never showed up, while others appeared again and again. In order to see these patterns, though, we needed an unusual fudge factor: an unexplained number multiplying our result. It was clear that there was some principle at work, a part of the physics intimately tied to particular types of numbers.

There were also questions that seemed less deep. In order to compute our result, we compared to predictions from other methods: specific situations where the question becomes simpler and there are other ways of calculating the answer. As we finished writing the paper, we realized we could do more with some of these predictions. There were situations we didn’t use that nonetheless simplified things, and more predictions that it looked like we could make. By the time we saw these, we were quite close to publishing, so most of us didn’t have the patience to follow these new leads. We just wanted to get the paper out.

At the time, I expected the new predictions would lead, at best, to more efficiency. Maybe we could have gotten our result faster, or cleaned it up a bit. They didn’t seem essential, and they didn’t seem deep.

Fast forward to this year, and some of my collaborators (specifically, Lance Dixon and Georgios Papathanasiou, along with Benjamin Basso) have a new paper up: “The Origin of the Six-Gluon Amplitude in Planar N=4 SYM”. The “origin” in their title refers to one of those situations: when the variables in the problem are small, and you’re close to the “origin” of a plot in those variables. But the paper also sheds light on the origin of our calculation’s mysterious “Cosmic Galois” behavior.

It turns out that the origin (of the plot) can be related to another situation, when the paths of two particles in our calculation almost line up. There, the calculation can be done with another method, called the Pentagon Operator Product Expansion, or POPE. By relating the two, Basso, Dixon, and Papathanasiou were able to predict not only how our result should have behaved near the origin, but how more complicated as-yet un-calculated results should behave.

The biggest surprise, though, lurked in the details. Building their predictions from the POPE method, they found their calculation separated into two pieces: one which described the physics of the particles involved, and a “normalization”. This normalization, predicted by the POPE method, involved some rather special numbers…the same as the “fudge factor” we had introduced earlier! Somehow, the POPE’s physics-based setup “knows” about Cosmic Galois Theory!

It seems that, by studying predictions in this specific situation, Basso, Dixon, and Papathanasiou have accomplished something much deeper: a strong hint of where our mysterious number patterns come from. It’s rather humbling to realize that, were I in their place, I never would have found this: I had assumed “the origin” was just a leftover detail, perhaps useful but not deep.

I’m still digesting their result. For now, it’s a reminder that I shouldn’t try to pre-judge questions. If you want to learn something deep, it isn’t enough to sit thinking about it, just focusing on that one problem. You have to follow every lead you have, work on every problem you can, do solid calculation after solid calculation. Sometimes, you’ll just make incremental progress, just fill in the details. But occasionally, you’ll have a breakthrough, something that justifies the whole adventure and opens your door to something strange and new. And I’m starting to think that when it comes to breakthroughs, that’s always been the only way there.

Why You Might Want to Bootstrap

A few weeks back, Quanta Magazine had an article about attempts to “bootstrap” the laws of physics, starting from simple physical principles and pulling out a full theory “by its own bootstraps”. This kind of work is a cornerstone of my field, a shared philosophy that motivates a lot of what we do. Building on deep older results, people in my field have found that just a few simple principles are enough to pick out specific physical theories.

There are limits to this. These principles pick out broad traits of theories: gravity versus the strong force versus the Higgs boson. As far as we know they don’t separate more closely related forces, like the strong nuclear force and the weak nuclear force. (Originally, the Quanta article accidentally made it sound like we know why there are four fundamental forces: we don’t, and the article’s phrasing was corrected.) More generally, a bootstrap method isn’t going to tell you which principles are the right ones. For any set of principles, you can always ask “why?”

With that in mind, why would you want to bootstrap?

First, it can make your life simpler. Those simple physical principles may be clear at the end, but they aren’t always obvious at the start of a calculation. If you don’t make good use of them, you might find you’re calculating many things that violate those principles, things that in the end all add up to zero. Bootstrapping can let you skip that part of the calculation, and sometimes go straight to the answer.

Second, it can suggest possibilities you hadn’t considered. Sometimes, your simple physical principles don’t select a unique theory. Some of the options will be theories you’ve heard of, but some might be theories that never would have come up, or even theories that are entirely new. Trying to understand the new theories, to see whether they make sense and are useful, can lead to discovering new principles as well.

Finally, even if you don’t know which principles are the right ones, some principles are better than others. If there is an ultimate theory that describes the real world, it can’t be logically inconsistent. That’s a start, but it’s quite a weak requirement. There are principles that aren’t required by logic itself, but that still seem important in making the world “make sense”. Often, we appreciate these principles only after we’ve seen them at work in the real world.

The best example I can think of is relativity: while Newtonian mechanics is logically consistent, it requires a preferred reference frame, a fixed notion of which things are moving and which things are still. This seemed reasonable for a long time, but now that we understand relativity the idea of a preferred reference frame seems like it should have been obviously wrong. It introduces something arbitrary into the laws of the universe, a “why is it that way?” question that doesn’t have an answer. That doesn’t mean it’s logically inconsistent, or impossible, but it does make it suspect in a way other ideas aren’t.

Part of the hope of these kinds of bootstrap methods is that they uncover principles like that, principles that aren’t mandatory but that are still in some sense “obvious”. Hopefully, enough principles like that really do specify the laws of physics. And if they don’t, we’ll at least have learned how to calculate better.

Calculating the Hard Way, for Science!

I had a new paper out last week, with Jacob Bourjaily and Matthias Volk. We’re calculating the probability that particles bounce off each other in our favorite toy model, N=4 super Yang-Mills. And this time, we’re doing it the hard way.

The “easy way” we didn’t take is one I have a lot of experience with. Almost as long as I’ve been writing this blog, I’ve been calculating these particle probabilities by “guesswork”: starting with a plausible answer, then honing it down until I can be confident it’s right. This might sound reckless, but it works remarkably well, letting us calculate things we could never have hoped for with other methods. The catch is that “guessing” is much easier when we know what we’re looking for: in particular, it works much better in toy models than in the real world.

Over the last few years, though, I’ve been using a much more “normal” method, one that so far has a better track record in the real world. This method, too, works better than you would expect, and we’ve managed some quite complicated calculations.

So we have an “easy way”, and a “hard way”. Which one is better? Is the hard way actually harder?

To test that, you need to do the same calculation both ways, and see which is easier. You want it to be a fair test: if “guessing” only works in the toy model, then you should do the “hard” version in the toy model as well. And you don’t want to give “guessing” any unfair advantages. In particular, the “guess” method works best when we know a lot about the result we’re looking for: what it’s made of, what symmetries it has. In order to do a fair test, we must use that knowledge to its fullest to improve the “hard way” as well.

We picked an example in the middle: not too easy, and not too hard, a calculation that was done a few years back “the easy way” but not yet done “the hard way”. We plugged in all the modern tricks we could, trying to use as much of what we knew as possible. We trained a grad student: Matthias Volk, who did the lion’s share of the calculation and learned a lot in the process. We worked through the calculation, and did it properly the hard way.

Which method won?

In the end, the hard way was indeed harder…but not by that much! Most of the calculation went quite smoothly, with only a few difficulties at the end. Just five years ago, when the calculation was done “the easy way”, I doubt anyone would have expected the hard way to be viable. But with modern tricks it wasn’t actually that hard.

This is encouraging. It tells us that the “hard way” has potential, that it’s almost good enough to compete at this kind of calculation. It tells us that the “easy way” is still quite powerful. And it reminds us that the more we know, and the more we apply our knowledge, the more we can do.

QCD Meets Gravity 2019

I’m at UCLA this week for QCD Meets Gravity, a conference about the surprising ways that gravity is “QCD squared”.

When I attended this conference two years ago, the community was branching out into a new direction: using tools from particle physics to understand the gravitational waves observed at LIGO.

At this year’s conference, gravitational waves have grown from a promising new direction to a large fraction of the talks. While there were still the usual talks about quantum field theory and string theory (everything from bootstrap methods to a surprising application of double field theory), gravitational waves have clearly become a major focus of this community.

This was highlighted before the first talk, when Zvi Bern brought up a recent paper by Thibault Damour. Bern and collaborators had recently used particle physics methods to push beyond the state of the art in gravitational wave calculations. Damour, an expert in the older methods, claims that Bern et al.’s result is wrong, and in doing so also questions an earlier result by Amati, Ciafaloni, and Veneziano. More than that, Damour argues that the whole approach of using these kinds of particle physics tools for gravitational waves is misguided.

There was a lot of good-natured ribbing of Damour in the rest of the conference, as well as some serious attempts to confront his points. Damour’s argument so far is somewhat indirect, so there is hope that a more direct calculation (which Damour is currently pursuing) will resolve the matter. In the meantime, Julio Parra-Martinez described a reproduction of the older Amati/Ciafaloni/Veneziano result with more Damour-approved techniques, as well as additional indirect arguments that Bern et al. got things right.

Before the QCD Meets Gravity community worked on gravitational waves, other groups had already built a strong track record in the area. One encouraging thing about this conference was how much the two communities are talking to each other. Several speakers came from the older community, and there were a lot of references in both groups’ talks to the other group’s work. This, more than even the content of the talks, felt like the strongest sign that something productive is happening here.

Many talks began by trying to motivate these gravitational calculations, usually to address the mysteries of astrophysics. Two talks were more direct, with Ramy Brustein and Pierre Vanhove speculating about new fundamental physics that could be uncovered by these calculations. I’m not the kind of physicist who does this kind of speculation, and I confess both talks struck me as rather strange. Vanhove in particular explicitly rejects the popular criterion of “naturalness”, making me wonder if his work is the kind of thing critics of naturalness have in mind.

QCD and Reductionism: Stranger Than You’d Think

Earlier this year, I made a list of topics I wanted to understand. The most abstract and technical of them was something called “Wilsonian effective field theory”. I still don’t understand Wilsonian effective field theory. But while thinking about it, I noticed something that seemed weird. It’s something I think many physicists already understand, but that hasn’t really sunk in with the public yet.

There’s an old problem in particle physics, described in many different ways over the years. Take our theories and try to calculate some reasonable number (say, the angle an electron turns in a magnetic field), and instead of that reasonable number we get infinity. We fix this problem with a process called renormalization that hides that infinity away, changing the “normalization” of some constant like a mass or a charge. While renormalization first seemed like a shady trick, physicists eventually understood it better. First, we thought of it as a way to work around our ignorance, assuming that the true final theory would have no infinities at all. Later, physicists instead thought about renormalization in terms of scaling.

Imagine looking at the world on a camera screen. You can zoom in, or zoom out. The further you zoom out, the more details you’ll miss: they’re just too small to be visible on your screen. You can guess what they might be, but your picture will be different depending on how you zoom.

In particle physics, many of our theories are like that camera. They come with a choice of “zoom setting”, a minimum scale where they still effectively tell the whole story. We call theories like these effective field theories. Some physicists argue that these are all we can ever have: since our experiments are never perfect, there will always be a scale so small we have no evidence about it.

In general, theories can be quite different at different scales. Some theories, though, are especially nice: they look almost the same as we zoom in to smaller scales. The only things that change are the mass of different particles, and their charges.

One theory like this is Quantum Chromodynamics (or QCD), the theory of quarks and gluons. Zoom in, and the theory looks pretty much the same, with one crucial change: the force between particles gets weaker. There’s a number, called the “coupling constant”, that describes how strong a force is; think of it as something like an electric charge. As you zoom in to quarks and gluons, you find you can still describe them with QCD, just with a smaller coupling constant. If you could zoom “all the way in”, the constant (and thus the force between particles) would be zero.
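For the curious, the textbook one-loop formula for this effect (a standard result, not something specific to this post) looks roughly like

α(Q) ≈ α(μ) / [1 + (b₀/2π) α(μ) ln(Q/μ)], with b₀ = 11 − (2/3) n_f,

where α is the QCD coupling constant, μ is the scale you started at, Q is the smaller scale you’re zooming in to, and n_f counts the types of quarks. Because b₀ comes out positive, zooming in (making Q larger than μ) makes α shrink, which is exactly the “force gets weaker” behavior described above, and in the limit of zooming all the way in the coupling goes to zero.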

This makes QCD a rare kind of theory: one that could be complete to any scale. No matter how far you zoom in, QCD still “makes sense”. It never gives contradictions or nonsense results. That doesn’t mean it’s actually true: it interacts with other forces, like gravity, that don’t have complete theories, so it probably isn’t complete either. But if we didn’t have gravity or electricity or magnetism, if all we had were quarks and gluons, then QCD could have been the final theory that described them.

And this starts feeling a little weird, when you think about reductionism.

Philosophers define reductionism in many different ways. I won’t be that sophisticated. Instead, I’ll suggest the following naive definition: Reductionism is the claim that theories on larger scales reduce to theories on smaller scales.

Here “reduce to” is intentionally a bit vague. It might mean “are caused by” or “can be derived from” or “are explained by”. I’m gesturing at the sort of thing people mean when they say that biology reduces to chemistry, or chemistry to physics.

What happens when we think about QCD, with this intuition?

QCD on larger scales does indeed reduce to QCD on smaller scales. If you want to ask why QCD on some scale has some coupling constant, you can explain it by looking at the (smaller) QCD coupling constant on a smaller scale. If you have equations for QCD on a smaller scale, you can derive the right equations for a larger scale. In some sense, everything you observe in your larger-scale theory of QCD is caused by what happens in your smaller-scale theory of QCD.

But this isn’t quite the reductionism you’re used to. When we say biology reduces to chemistry, or chemistry reduces to physics, we’re thinking of just a few layers: one specific theory reduces to another specific theory. Here, we have an infinite number of layers, every point on the scale from large to small, each one explained by the next.

Maybe you think you can get out of this by saying that everything should reduce to the smallest scale. But remember, the smaller the scale the smaller our “coupling constant”, and the weaker the forces between particles. At “the smallest scale”, the coupling constant is zero, and there is no force. It’s only when you put your hand on the zoom knob and start turning that the force starts to exist.

It’s reductionism, perhaps, but not as we know it.

Now that I understand this a bit better, I get some of the objections to my post about naturalness a while back. I was being too naive about this kind of thing, as some of the commenters (particularly Jacques Distler) noted. I believe there’s a way to rephrase the argument so that it still works, but I’d have to think harder about how.

I also get why I was uneasy about Sabine Hossenfelder’s FQXi essay on reductionism. She considered a more complicated case, where the chain from large to small scale could be broken, a more elaborate variant of a problem in Quantum Electrodynamics. But if I’m right here, then it’s not clear that scaling in effective field theories is even the right way to think about this. When you have an infinite series of theories that reduce to other theories, you’re pretty far removed from what most people mean by reductionism.

Finally, this is the clearest reason I can find why you can’t do science without an observer. The “zoom” is just a choice we scientists make, an arbitrary scale describing our ignorance. But without it, there’s no way to describe QCD. The notion of scale is an inherent and inextricable part of the theory, and it doesn’t have to mean our theory is incomplete.

Experts, please chime in if I’m wrong on the physics here. As I mentioned at the beginning, I still don’t think I understand Wilsonian effective field theory. If I’m right, though, this seems genuinely weird, and something more of the public should appreciate.