A reductionist would say that the behavior of the big is due to the behavior of the small. Big things are made up of small things, so anything the big things do must be explicable in terms of what the small things are doing. It may be very hard to explain things this way: you wouldn’t want to describe the economy in terms of the motion of carbon atoms. But in principle, if you could calculate everything, you’d find the small things are enough: there are no fundamental “new rules” that only apply to big things.
A physicist reductionist would have to amend this story. Zoom in far enough, and it doesn’t really make sense to talk about “small things”, “big things”, or even “things” at all. The world is governed by interactions of quantum fields, ripples spreading and colliding and changing form. Some of these ripples (like the ones we call “protons”) are made up of smaller things…but ultimately most aren’t. They just are what they are.
Still, a physicist can rescue the idea of reductionism by thinking about renormalization. If you’ve heard of renormalization, you probably think of it as a trick physicists use to hide inconvenient infinite results in their calculations. But an arguably better way to think about it is as a kind of “zoom” dial for quantum field theories. Starting with a theory, we can use renormalization to “zoom out”, ignoring the smallest details and seeing what picture emerges. As we “zoom”, different forces will seem to get stronger or weaker: electromagnetism matters less when we zoom out, the strong nuclear force matters more.
(Why then, is electromagnetism so much more important in everyday life? The strong force gets so strong as we zoom out that we stop seeing individual particles, and only see them bound into protons and neutrons. Electromagnetism is like this too, binding electrons and protons into neutral atoms. In both cases, it can be better, once we’ve zoomed out, to use a new description: you don’t want to do chemistry keeping track of the quarks and gluons.)
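To make the “zoom dial” concrete, here’s a rough numerical sketch of my own (nothing from the a-theorem argument itself) using the standard one-loop running of the two couplings. The starting values, chosen near the Z boson mass, and the five-quark-flavour coefficient are only illustrative, and the crude Euler integration is just for show:

```python
import numpy as np

# Standard one-loop beta functions. The QED coefficient below assumes a
# single charged fermion; the QCD one assumes five quark flavours.
def beta_qed(alpha):
    return (2.0 / (3.0 * np.pi)) * alpha**2

def beta_qcd(alpha_s, n_f=5):
    b0 = 11.0 - (2.0 / 3.0) * n_f
    return -(b0 / (2.0 * np.pi)) * alpha_s**2

# "Zooming out" means lowering the energy scale mu, so we take steps
# with d(ln mu) negative. Rough starting values near the Z mass:
alpha, alpha_s = 1.0 / 137.0, 0.118
dlnmu = -0.01
for _ in range(500):  # zoom out by a factor of e^5 in energy
    alpha += beta_qed(alpha) * dlnmu      # electromagnetism: weaker
    alpha_s += beta_qcd(alpha_s) * dlnmu  # strong force: stronger

print(f"zoomed out: alpha = {alpha:.5f}, alpha_s = {alpha_s:.3f}")
```

Run it and you’ll see alpha barely budge downward while alpha_s grows substantially: the “electromagnetism matters less, the strong force matters more” behavior described above.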
A physicist reductionist, then, would expect renormalization to always go “one way”. As we “zoom out”, we should find that our theories in a meaningful sense get simpler and simpler. Maybe they’re still hard to work with: it’s easier to think about gluons and quarks when zoomed in than the zoo of different nuclear particles we need to consider when zoomed out. But at each step, we’re ignoring some details. And if you’re a reductionist, you shouldn’t expect “zooming out” to show you anything truly fundamentally new.
Can you prove that, though?
Surprisingly, yes!
In 2011, Zohar Komargodski and Adam Schwimmer proved a result called the a-theorem. “The a-theorem” may be the least google-able name for a theorem in the universe, which has probably made it hard to popularize. It is named after a quantity, labeled “a”, that gives a particular way to add up energy in a quantum field theory. Komargodski and Schwimmer proved that when you do the renormalization procedure and “zoom out”, this quantity “a” will always get smaller.
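For the curious (and glossing over normalization conventions, which vary between references): “a” shows up as one of the coefficients in the trace anomaly of a four-dimensional theory placed on a curved background,

$$ \langle T^{\mu}{}_{\mu} \rangle \;\sim\; c\, W_{\mu\nu\rho\sigma}W^{\mu\nu\rho\sigma} \;-\; a\, E_4, $$

where $E_4$ is the Euler density and $W$ is the Weyl tensor. In this notation, the a-theorem is the statement that $a_{\mathrm{UV}} \geq a_{\mathrm{IR}}$: zooming out never increases a.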
Why does this say anything about reductionism?
Suppose you have a theory that violates reductionism. You zoom out, and see something genuinely new: a fact about big things that isn’t due to facts about small things. If you had a theory like that, then you could imagine “zooming in” again, and using your new fact about big things to predict something about the small things that you couldn’t before. You’d find that renormalization doesn’t just go “one way”: with new facts able to show up at every scale, zooming out isn’t necessarily ignoring more and zooming in isn’t necessarily ignoring less. It would depend on the situation which way the renormalization procedure would go.
The a-theorem puts a stop to this. It says that, when you “zoom out”, there is a number that always gets smaller. In some ways it doesn’t matter what that number is (as long as you’re not cheating and using the scale directly). In this case, it is a number that loosely counts “how much is going on” in a given space. And because it always decreases when you do renormalization, it means that renormalization can never “go backwards”. You can never renormalize back from your “zoomed out” theory to the “zoomed in” one.
The a-theorem, like every theorem, is based on assumptions. Here, the assumptions are mostly that quantum field theory works in the normal way: that the theory we’re dealing with isn’t some totally new type of theory. One assumption I find interesting is locality, the requirement that no signals travel faster than the speed of light. On a naive level, this makes a lot of sense to me. If you can send signals faster than light, then you can’t control your “zoom lens”: physics in a small area might be changed by something happening very far away, so you can’t “zoom in” in a way that keeps including everything that could possibly be relevant. With faster-than-light signals, you could transmit information between different parts of big things without it having to “go through” small things first. You’d screw up reductionism, and have surprises show up on every scale.
Personally, I find it really cool that it’s possible to prove a theorem that says something about a seemingly philosophical topic like reductionism. Even with assumptions (and even with the above speculations about the speed of light), it’s quite interesting that one can say anything at all about this kind of thing from a physics perspective. I hope you find it interesting too!

Okay, this is pretty far beyond me.
You are saying reductionism is rescued, right? Theoretically at least, even if it’s impossible in practice.
But based on the paper, does this only work in 4 dimensions?
Yeah, my takeaway at least is that reductionism is rescued (though I may be reading too much into it 😉 )
The paper applies to 4 dimensions (3 space+1 time). There’s an older result proving something similar in 2 dimensions (1 space+1 time). (I have the vague impression that one doesn’t need to demand locality either, but I’m not sure I’m remembering right.) It hasn’t been proven for 3 dimensions, or anything higher than 4, yet.
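(For reference, the older two-dimensional result is Zamolodchikov’s c-theorem from 1986: there is a function $c(g)$ of the couplings that decreases monotonically as you zoom out and matches the Virasoro central charge at the fixed points, so schematically

$$ c_{\mathrm{UV}} \;\geq\; c_{\mathrm{IR}}. $$

)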
The debate seems somewhat academic, since it would be impossible to derive any significant level of complexity from quantum behavior. So, pragmatically, reductionism is a dead end even if it is correct.
Adam Frank references a book by Laughlin:
Robert Laughlin, also a condensed matter physicist, wrote a book called A Different Universe, in which he argued that attempts to apply the fundamental equations of quantum mechanics to any system with more than 100 particles leave you with something that can only be solved with God’s computer (i.e., it can’t really be solved).
https://bigthink.com/13-8/condensed-matter-physicists-reject-reductionism/
I don’t know what the state of the art in quantum many-body simulations is, but 100 particles seems low to me. Not impossibly low, but I’m guessing that there are methods these days that have pushed that number a bit.
And for calculations that can’t be done explicitly or in simulation, one can often still systematically coarse-grain. To some extent, this is precisely how the condensed matter field works: take a system, make an educated guess for what properties matter, and then model the system taking only those properties into account. Instead of electrons and nuclei, treat molecules as dipoles. Instead of modeling the atoms in every cell, you model the cells as “active matter”, darting forward and randomly turning. I don’t really see that kind of thing as a repudiation of reductionism, since every picture like this is justified by what people think the lower-level physics is; they’re just simplifying things to make them computable, so that you see the interesting complexities and ignore the irrelevant ones. (There’s something subjective and intuitive here of course, as there is every time we humans try to reason about the world.)
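As a toy example of that kind of coarse-graining (my own sketch, not any specific model from the literature): instead of tracking every atom in a cell, you can reduce each cell to a point that darts forward and randomly turns:

```python
import numpy as np

# Toy "active matter" coarse-graining: each cell is reduced to a position
# and a heading. All parameter values are made up for illustration.
rng = np.random.default_rng(0)
n, speed, turn_noise, dt, box = 200, 1.0, 0.5, 0.1, 50.0

pos = rng.uniform(0, box, size=(n, 2))     # cell positions in a square box
angle = rng.uniform(0, 2 * np.pi, size=n)  # heading of each cell

for step in range(1000):
    # dart forward along the current heading...
    pos += speed * dt * np.column_stack([np.cos(angle), np.sin(angle)])
    pos %= box                             # periodic boundaries
    # ...then turn by a small random amount
    angle += turn_noise * np.sqrt(dt) * rng.normal(size=n)

# with no interactions, the headings decorrelate; alignment drifts to ~0
alignment = np.abs(np.exp(1j * angle).mean())
print(f"heading alignment after the run: {alignment:.3f}")
```

None of those lines mention an electron or a nucleus, but the model is still justified by beliefs about the lower-level physics.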
Since you usually can’t literally run a simulation down to the smallest details, I get that it may seem beside the point. But it matters: the fact that the lowest levels ground the other ones means that you can reason about the higher levels based on your intuition about the lower ones, ruling out some theories because they’re not reasonable given what you already know. You can’t do that with the lowest level, which is an extra challenge for fundamental physics that isn’t present in any other science.
What would be the lowest level? Two quarks or something even lower? Would you need at least two of something but no more than n of something to be irreducible?
Any speculations?
If you’re thinking about individual quarks, you’re not thinking on the lowest level. The lowest level is field theories, laws of the form “if x happens then y”, with some coarse-graining choice where you’re ignoring what happens beneath a certain scale. Individual quarks come out of the laws, and their total is conserved as far as we know at the moment, but “two quarks” is not the kind of level of analysis you want for fundamental physics.
It has been proven in 3 dimensions too: it’s called the F-theorem. In 2 dimensions you also need locality.
Thanks! Somehow I found a bunch of old papers saying the F-theorem was a conjecture and missed the more recent ones.
Hmm, but I once heard someone say that RG flows can have limit cycles, which seems to contradict the philosophical picture you’re going for here. I’m curious how that’s compatible with the a-theorem. For example, see:
https://www.sciencedirect.com/science/article/abs/pii/S055032130400625X?via%3Dihub
So, that paper came before the a-theorem proof, but after the c-theorem. They seem to be arguing that the limit cycle behavior is not visible from the perspective of the original c, only from some other c_effective. I don’t know whether later literature explained the situation, but they sound a little confused by it, so I would assume someone figured out more later.