Category Archives: Amateur Philosophy

Reality as an Algebra of Observables

Listen to a physicist talk about quantum mechanics, and you’ll hear the word “observable”. Observables are, intuitively enough, things that can be observed. They’re properties that, in principle, one could measure in an experiment, like the position of a particle or its momentum. They’re the kinds of things linked by uncertainty principles, where the better you know one, the worse you know the other.

Some physicists get frustrated by this focus on measurements alone. They think we ought to treat quantum mechanics, not like a black box that produces results, but as information about some underlying reality. Instead of just observables, they want us to look for “beables”: not just things that can be observed, but things that something can be. From their perspective, the way other physicists focus on observables feels like giving up, like those physicists are abandoning their sacred duty to understand the world. Others, like the Quantum Bayesians or QBists, disagree, arguing that quantum mechanics really is, and ought to be, a theory of how individuals get evidence about the world.

I’m not really going to weigh in on that debate: I still don’t feel like I know enough to even write a decent summary. But I do think that one of the instincts on the “beables” side is wrong. If we focus on observables in quantum mechanics, I don’t think we’re doing anything all that unusual. Even in other parts of physics, we can think about reality purely in terms of observations. Doing so isn’t a dereliction of duty: often, it’s the most useful way to understand the world.

When we try to comprehend the world, we always start alone. From our time in the womb, we have only our senses and emotions to go on. With a combination of instinct and inference we start assembling a consistent picture of reality. Philosophers called phenomenologists (not to be confused with the physicists called phenomenologists) study this process in detail, trying to characterize how different things present themselves to an individual consciousness.

For my point here, these details don’t matter so much. That’s because in practice, we aren’t alone in understanding the world. Based on what others say about the world, we conclude they perceive much like we do, and we learn from their observations just as we learn from our own. We can make things abstract: instead of the specifics of how individuals perceive, we think about groups of scientists making measurements. At the end of this train of abstraction lie observables: things that we as a community could in principle learn, and share with each other, ignoring the details of how exactly we measure them.

If each of these observables were unrelated, just scattered points of data, then we couldn’t learn much. Luckily, they are related. In quantum mechanics, some of these relationships are the uncertainty principles I mentioned earlier. Others relate measurements at different places, or at different times. The fancy way to refer to all these relationships is as an algebra: loosely, it’s something you can “do algebra with”, like you did with numbers and variables in high school. When physicists and mathematicians want to do quantum mechanics or quantum field theory seriously, they often talk about an “algebra of observables”, a formal way of thinking about all of these relationships.
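
To make “algebra of observables” a little more concrete, here’s a minimal sketch in Python. I’m using electron spin purely because it fits in two-by-two matrices; the punchline is general: observables add and multiply like matrices, and a nonzero commutator in that algebra is exactly where an uncertainty principle comes from.

```python
import numpy as np

# Two observables for a spin-1/2 particle: spin along x and spin along z.
# (A finite-dimensional stand-in for position and momentum.)
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

# The "algebra" is what you get by adding and multiplying observables.
# Multiplication order matters: these two don't commute.
commutator = sigma_x @ sigma_z - sigma_z @ sigma_x
print(commutator)
# [[ 0.+0.j -2.+0.j]
#  [ 2.+0.j  0.+0.j]]   (equal to -2i times sigma_y: nonzero!)

# That nonzero commutator is where an uncertainty principle comes from:
# the Robertson relation bounds the product of uncertainties,
#   dA * dB >= |<[A, B]>| / 2,
# so observables that fail to commute can't both be known sharply.
```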

Focusing on those two things, observables and how they are related, isn’t just useful in the quantum world. It’s an important way to think in other areas of physics too. If you’ve heard people talk about relativity, the focus on measurement screams out, in thought experiments full of abstract clocks and abstract yardsticks. Without this discipline you run into paradoxes, which resolve only once you carefully track what each observer can measure. More recently, physicists in my field have had success computing the chance that particles collide by focusing on the end result, the actual measurements people can make, ignoring what might happen in between to cause that measurement. We can then break measurements down into simpler measurements, or use the structure of simpler measurements to guess more complicated ones. While we typically have done this in quantum theories, that’s not really a limitation: the same techniques make sense for problems in classical physics, like computing the gravitational waves emitted by colliding black holes.

With this in mind, we really can think of reality in those terms: not as a set of beable objects, but as a set of observable facts, linked together in an algebra of observables. Paring things down to what we can know in this way is more honest, and it’s also more powerful and useful. Far from a betrayal of physics, it’s the best advantage we physicists have in our quest to understand the world.

Inevitably Arbitrary

Physics is universal…or at least, it aspires to be. Drop an apple anywhere on Earth, at any point in history, and it will accelerate at roughly the same rate. When we call something a law of physics, we expect it to hold everywhere in the universe. It shouldn’t depend on anything arbitrary.

Sometimes, though, something arbitrary manages to sneak in. Even if the laws of physics are universal, the questions we want to answer are not: they depend on our situation, on what we want to know.

The simplest example is when we have to use units. The mass of an electron is the same here as it is on Alpha Centauri, the same now as it was when the first galaxies formed. But what is that mass? We could write it as 9.1093837015×10⁻³¹ kilograms, if we wanted to, but kilograms aren’t exactly universal. Their modern definition is at least based on physical constants, but with some pretty arbitrary numbers. It defines the Planck constant as 6.62607015×10⁻³⁴ Joule-seconds. Chase that number back, and you’ll find references to the Earth’s circumference and the time it takes to turn round on its axis. The mass of the electron may be the same on Alpha Centauri, but they’d never write it as 9.1093837015×10⁻³¹ kilograms.

Units aren’t the only time physics includes something arbitrary. Sometimes, like with units, we make a choice of how we measure or calculate something. We choose coordinates for a plot, a reference frame for relativity, a zero for potential energy, a gauge for gauge theories and regularization and subtraction schemes for quantum field theory. Sometimes, the choice we make is instead what we measure. To do thermodynamics we must choose what we mean by a state, to call two substances water even if their atoms are in different places. Some argue a perspective like this is the best way to think about quantum mechanics. In a different context, I’d argue it’s why we say coupling constants vary with energy.

So what do we do, when something arbitrary sneaks in? We have a few options. I’ll illustrate each with the mass of the electron:

  • Make an arbitrary choice, and stick with it: There’s nothing wrong with measuring an electron in kilograms, if you’re consistent about it. You could even use ounces. You just have to make sure that everyone else you compare with is using the same units, or be careful to convert.
  • Make a “natural” choice: Why not set the speed of light and Planck’s constant to one? They come up a lot in particle physics, and all they do is convert between length and time, or time and energy. That way you can use the same units for all of them, and use something convenient, like electron-Volts. They even have electron in the name! Of course they also have “Volt” in the name, and Volts are as arbitrary as any other metric unit. A “natural” choice might make your life easier, but you should always remember it’s still arbitrary.
  • Make an efficient choice: This isn’t always the same as the “natural” choice. The units you choose have an effect on how difficult your calculation is. Sometimes, the best choice for the mass of an electron is “one electron-mass”, because it lets you calculate something else more easily. This is easier to illustrate with other choices: for example, if you have to pick a reference frame for a collision, picking one in which one of the objects is at rest, or where they move symmetrically, might make your job easier.
  • Stick to questions that aren’t arbitrary: No matter what units we use, the electron’s mass will be arbitrary. Its ratios to other masses won’t be, though. No matter where we measure, dimensionless ratios like the mass of the muon divided by the mass of the electron, or the mass of the electron divided by the value of the Higgs field, will be the same. If we can make sure to ask only this kind of question, we can avoid arbitrariness. Note that we can think of even a mass in “kilograms” as this kind of question: what’s the ratio of the mass of the electron to “this arbitrary thing we’ve chosen”? In practice though, you want to compare things in the same theory, without the historical baggage of metric. (A short sketch after this list makes the “natural” and ratio options concrete.)
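
Here’s that sketch, in Python. The constants are standard CODATA values, quoted approximately; nothing else about the example is special:

```python
# Arbitrary vs. dimensionless: standard constants, quoted approximately.
m_electron_kg = 9.1093837015e-31   # electron mass, in kilograms
m_muon_kg     = 1.883531627e-28    # muon mass, in kilograms
c  = 299792458.0                   # speed of light, m/s (exact by definition)
eV = 1.602176634e-19               # one electron-Volt, in Joules (exact)

# "Natural" choice: quote the mass as an energy, m*c^2, in MeV.
print(f"electron mass: {m_electron_kg * c**2 / eV / 1e6:.4f} MeV")  # ~0.5110

# Dimensionless choice: a ratio of masses. The kilograms cancel, so the
# answer is the same whatever units you started from.
print(f"muon/electron mass ratio: {m_muon_kg / m_electron_kg:.2f}")  # ~206.77
```

The MeV number changed the moment we swapped units; the ratio wouldn’t, no matter which units we started from.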

This problem may seem silly, and if we just cared about units it might be. But at the cutting-edge of physics there are still areas where the arbitrary shows up. Our choices of how to handle it, or how to avoid it, can be crucial to further progress.

Science as Hermeneutics: Closer Than You’d Think

This post is once again inspired by a Ted Chiang short story. This time, it’s “The Evolution of Human Science”, which imagines a world in which super-intelligent “metahumans” have become incomprehensible to the ordinary humans they’ve left behind. Human scientists in that world practice “hermeneutics”: instead of original research, they try to interpret what the metahumans are doing, reverse-engineering their devices and observing their experiments.

Much like a blogger who, out of ideas, cribs them from books.

It’s a thought-provoking view of what science in the distant future could become. But it’s also oddly familiar.

You might think I’m talking about machine learning here. It’s true that in recent years people have started using machine learning in science, with occasionally mysterious results. There are even a few cases of physicists using machine learning to suggest some property, say of Calabi-Yau manifolds, and then figuring out how to prove it. It’s not hard to imagine a day when scientists are reduced to just interpreting whatever the AIs throw at them…but I don’t think we’re quite there yet.

Instead, I’m thinking about my own work. I’m a particular type of theoretical physicist. I calculate scattering amplitudes, formulas that tell us the probabilities that subatomic particles collide in different ways. We have a way to calculate these, Feynman’s famous diagrams, but they’re inefficient, so researchers like me look for shortcuts.

How do we find those shortcuts? Often, it’s by doing calculations the old, inefficient way. We use older methods, look at the formulas we get, and try to find patterns. Each pattern is a hint at some new principle that can make our calculations easier. Sometimes we can understand the pattern fully, and prove it should hold. Other times, we observe it again and again and tentatively assume it will keep going, and see what happens if it does.

Either way, this isn’t so different from the hermeneutics scientists practice in the story. Feynman diagrams already “know” every pattern we find, like the metahumans in the story who already know every result the human scientists can discover. But that “knowledge” isn’t in a form we can understand or use. We have to learn to interpret it, to read between the lines and find underlying patterns, to end up with something we can hold in our own heads and put into action with our own hands. The truth may be “out there”, but scientists can’t be content with that. We need to get the truth “in here”. We need to interpret it for ourselves.

The Parameter Was Inside You All Along

Sabine Hossenfelder had an explainer video recently on how to tell science from pseudoscience. This is a famously difficult problem, so naturally she and I have different opinions. I actually think the picture she draws is reasonably sound. But while it is a good criterion to tell whether you yourself are doing pseudoscience, it’s surprisingly tricky to apply it to other people.

Hossenfelder argues that science, at its core, is about explaining observations. To tell whether something is science or pseudoscience you need to ask, first, if it agrees with observations, and second, if it is simpler than those observations. In particular, a scientist should prefer models with fewer parameters. If your model has so many parameters that you can fit any observation, you’re not being scientific.

This is a great rule of thumb, one that as Hossenfelder points out forms the basis of a whole raft of statistical techniques. It does rely on one tricky judgement, though: how many parameters does your model actually have?
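
One way that judgement gets formalized, once you’ve settled on a parameter count, is with an information criterion. Here’s a toy sketch in Python (the data is made up, and the Akaike Information Criterion is just one of that raft of techniques) showing how extra parameters get charged for:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up data: a straight line plus noise, purely to illustrate
# how parameter-counting criteria work.
x = np.linspace(0, 1, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, size=x.size)

def aic(degree):
    """Fit a polynomial of the given degree, return its AIC.
    AIC = 2k + n*ln(RSS/n): lower is better, and every extra
    parameter costs 2 points that a better fit must earn back."""
    coeffs = np.polyfit(x, y, degree)
    rss = np.sum((y - np.polyval(coeffs, x)) ** 2)
    k = degree + 1  # number of fitted parameters
    return 2 * k + x.size * np.log(rss / x.size)

for d in (1, 3, 5):
    print(f"degree {d}: AIC = {aic(d):.1f}")
# Higher-degree fits hug the noise slightly better, but their extra
# parameters are penalized, so the simple line generally comes out best.
```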

Suppose I’m one of those wacky theorists who propose a whole new particle to explain some astronomical mystery. Hossenfelder, being more conservative in these things, proposes a model with no new particles. Neither of our models fit the data perfectly. Perhaps my model fits a little better, but after all it has one extra parameter, from the new particle. If we want to compare our models, we should take that into account, and penalize mine.

Here’s the question, though: how do I know that Hossenfelder didn’t start out with more particles, and got rid of them to get a better fit? If she did, she had more parameters than I did. She just fit them away.

The problem here is closely related to one called the look-elsewhere effect. Scientists don’t publish everything they try. An unscrupulous scientist can do a bunch of different tests until one of them randomly works, and just publish that one, making the result look meaningful when really it was just random chance. Even if no individual scientist is unscrupulous, a community can do the same thing: many scientists testing many different models, until one accidentally appears to work.
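
The community version of this is easy to simulate. Here’s a toy sketch where every “test” is pure noise, and we do what an unscrupulous community effectively does: report only the best-looking result.

```python
import numpy as np

rng = np.random.default_rng(42)

# A community's worth of null results: each "test" is pure noise,
# so under the usual convention its p-value is uniform on [0, 1].
n_tests = 100
p_values = rng.uniform(0, 1, size=n_tests)

# Publish only the most "significant" of the hundred:
print(f"best p-value of {n_tests} null tests: {p_values.min():.4f}")

# Chance that at least one null test dips below the usual 0.05 bar:
print(f"P(at least one false 'discovery') = {1 - 0.95**n_tests:.3f}")  # ~0.994
```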

As a scientist, you mostly know if your motivations are genuine. You know if you actually tried a bunch of different models or had good reasons from the start to pick the one you did. As someone judging other scientists, you often don’t have that luxury. Sometimes you can look at prior publications and see all the other attempts someone made. Sometimes they’ll even tell you explicitly what parameters they used and how they fit them. But sometimes, someone will swear up and down that their model is just the most natural, principled choice they could have made, and they never considered anything else. When that happens, how do we guard against the look-elsewhere effect?

The normal way to deal with the look-elsewhere effect is to consider, not just whatever tests the scientist claims to have done, but all tests they could reasonably have done. You need to count all the parameters, not just the ones they say they varied.

This works in some fields. If you have an idea of what’s reasonable and what’s not, you have a relatively manageable list of things to look at. You can come up with clear rules for which theories are simpler than others, and people will agree on them.

Physics doesn’t have it so easy. We don’t have any pre-set rules for what kind of model is “reasonable”. If we want to parametrize every “reasonable” model, the best we can do is use what are called Effective Field Theories, theories which try to describe every possible type of new physics in terms of its effect on the particles we already know. Even there, though, we need assumptions. The most popular effective field theory, called SMEFT, assumes the forces of the Standard Model keep their known symmetries. You get a different model if you relax that assumption, and even that model isn’t the most general: for example, it still keeps relativity intact. Try to make the most general model possible, and you end up waist-deep in parameter soup.

Subjectivity is a dirty word in science…but as far as I can tell it’s the only way out of this. We can try to count parameters when we can, and use statistical tools…but at the end of the day, we still need to make choices. We need to judge what counts as an extra parameter and what doesn’t, which possible models to compare to and which to ignore. That’s going to depend on our scientific culture, on fashion and aesthetics; there just isn’t a way around that. The best we can do is own up to our assumptions, and be ready to change them when we need to.

Bottomless Science

There’s an attitude I keep seeing among physics crackpots. It goes a little something like this:

“Once upon a time, physics had rules. You couldn’t just wave your hands and write down math, you had to explain the world with real physical things.”

What those “real physical things” were varies. Some miss the days when we explained things mechanically, particles like little round spheres clacking against each other. Some want to bring back absolutes: an absolute space, an absolute time, an absolute determinism. Some don’t like the proliferation of new particles, and yearn for the days when everything was just electrons, protons, and neutrons.

In each case, there’s a sense that physicists “cheated”. That, faced with something they couldn’t actually explain, they made up new types of things (fields, relativity, quantum mechanics, antimatter…) instead. That way they could pretend to understand the world, while giving up on their real job, explaining it “the right way”.

I get where this attitude comes from. It does make a certain amount of sense…for other fields.

As an economist, you can propose whatever mathematical models you want, but at the end of the day they have to boil down to actions taken by people. An economist who proposed some sort of “dark money” that snuck into the economy without any human intervention would get laughed at. Similarly, as a biologist or a chemist, you ultimately need a description that makes sense in terms of atoms and molecules. Your description doesn’t actually need to be in terms of atoms and molecules, and often it can’t be: you’re concerned with a different level of explanation. But it should be possible in terms of atoms and molecules, and that puts some constraints on what you can propose.

Why shouldn’t physics have similar constraints?

Suppose you had a mandatory bottom level like this. Maybe everything boils down to ball bearings, for example. What happens when you study the ball bearings?

Your ball bearings have to have some properties: their shape, their size, their weight. Where do those properties come from? What explains them? Who studies them?

Any properties your ball bearings have can be studied, or explained, by physics. That’s physics’s job: to study the fundamental properties of matter. Any “bottom level” is just as fit a subject for physics as anything else, and you can’t explain it using itself. You end up needing another level of explanation.

Maybe you’re objecting here that your favorite ball bearings aren’t up for debate: they’re self-evident, demanded by the laws of mathematics or philosophy.

Here, for lack of space, I’ll only say that mathematics and philosophy don’t work that way. Mathematics can tell you whether you’ve described the world consistently, whether the conclusions you draw from your assumptions actually follow. Philosophy can see if you’re asking the right questions, if you really know what you think you know. Both have lessons for modern physics, and you can draw valid criticisms from either. But neither one gives you a single clear way the world must be. Not since the days of Descartes and Kant have people been that naive.

Because of this, physics is doing something a bit different from economics and biology. Each field wants to make models, wants to describe its observations. But in physics, ultimately, those models are all we have. We don’t have a “bottom level”, a backstop where everything has to make sense. That doesn’t mean we can just make stuff up, and whenever possible we understand the world in terms of physics we’ve already discovered. But when we can’t, all bets are off.

Understanding Is Translation

Kernighan’s Law states, “Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.” People sometimes make a similar argument about philosophy of mind: “The attempt of the mind to analyze itself [is] an effort analogous to one who would lift himself by his own bootstraps.”

Both points operate on a shared kind of logic. They picture understanding something as modeling it in your mind, with every detail clear. If you’ve already used all your mind’s power to design code, you won’t be able to model when it goes wrong. And modeling your own mind is clearly nonsense: you would need an even larger mind to hold the model.

The trouble is, this isn’t really how understanding works. To understand something, you don’t need to hold a perfect model of it in your head. Instead, you translate it into something you can more easily work with. Like explanations, these translations can be different for different people.

To understand something, I need to know the algorithm behind it. I want to know how to calculate it, the pieces that go in and where they come from. I want to code it up, to test it out on odd cases and see how it behaves, to get a feel for what it can do.

Others need a more physical picture. They need to know where the particles are going, or how energy and momentum are conserved. They want entropy to be increased, action to be minimized, scales to make sense dimensionally.

Others in turn are more mathematical. They want to start with definitions and axioms. To understand something, they want to see it as an example of a broader class of thing, groups or algebras or categories, to fit it into a bigger picture.

Each of these is a kind of translation, turning something into code-speak or physics-speak or math-speak. They don’t require modeling every detail, but when done well they can still explain every detail.

So while yes, it is good practice not to write code that is too “smart”, and too hard to debug…it’s not impossible to debug your smartest code. And while you can’t hold an entire mind inside of yours, you don’t actually need to do that to understand the brain. In both cases, all you need is a translation.

A Scale of “Sure-Thing-Ness” for Experiments

No experiment is a sure thing. No matter what you do, what you test, what you observe, there’s no guarantee that you find something new. Even if you do your experiment correctly and measure what you planned to measure, nature might not tell you anything interesting.

Still, some experiments are more sure than others. Sometimes you’re almost guaranteed to learn something, even if it wasn’t what you hoped, while other times you just end up back where you started.

The first, and surest, type of experiment is a voyage into the unknown. When nothing is known about your target, no expectations, and no predictions, then as long as you successfully measure anything you’ll have discovered something new. This can happen if the thing you’re measuring was only recently discovered. If you’re the first person who manages to measure the reaction rates of an element, or the habits of an insect, or the atmosphere of a planet, then you’re guaranteed to find out something you didn’t know before.

If you don’t have a total unknown to measure, then you want to test a clear hypothesis. The best of these are the theory killers, experiments which can decisively falsify an idea. History’s most famous experiments take this form, like the measurement of the perihelion precession of Mercury to test General Relativity or Pasteur’s tests of spontaneous generation. When you have a specific prediction and not much wiggle room, an experiment can teach you quite a lot.

“Not much wiggle room” is key, because these tests can all too easily become theory modifiers instead. If you can tweak your theory enough, then your experiment might not be able to falsify it. Something similar applies when you have a number of closely related theories. Even if you falsify one, you can just switch to another similar idea. In those cases, testing your theory won’t always teach you as much: you have to get lucky and see something that pins your theory down more precisely.

Finally, you can of course be just looking. Some experiments are just keeping an eye out, in the depths of space or the precision of quantum labs, watching for something unexpected. That kind of experiment might never see anything, and never rule anything out, but it can still sometimes be worthwhile.

There’s some fuzziness to these categories, of course. Often when scientists argue about whether an experiment is worth doing they’re arguing about which category to place it in. Would a new collider be a “voyage into the unknown” (new energy scales we’ve never measured before), a theory killer/modifier (supersymmetry! but which one…) or just “just looking”? Is your theory of cosmology specific enough to be “killed”, or merely “modified”? Is your wacky modification of quantum mechanics something that can be tested, or merely “just looked” for?

For any given experiment, it’s worth keeping in mind what you expect, and what would happen if you’re wrong. In science, we can’t do every experiment we want. We have to focus our resources and try to get results. Even if it’s never a sure thing.

The Teaching Heuristic for Non-Empirical Science

Science is by definition empirical. We discover how the world works not by sitting and thinking, but by going out and observing the world. But sometimes, all the observing we can do can’t possibly answer a question. In those situations, we might need “non-empirical science”.

The blog Slate Star Codex had a series of posts on this topic recently. Its author hangs out with a crowd that supports the many-worlds interpretation of quantum mechanics: the idea that quantum events are not truly random, but instead that all outcomes happen, the universe metaphorically splitting into different possible worlds. These metaphorical universes can’t be observed, so no empirical test can tell the difference between this and other interpretations of quantum mechanics: if we could ever know the difference, it would have to be for “non-empirical” reasons.

What reasons are those? Slate Star Codex teases out a few possible intuitions. He points out that we reject theories that have “unnecessary” ideas. He imagines a world where chemists believe that mixing an acid and a base also causes a distant star to go supernova, and a creationist world where paleontologists believe fossils are placed by the devil. In both cases, there might be no observable difference between their theories and ours, but because their theories have “extra pieces” (the distant star, the devil), we reject them for non-empirical reasons. Slate Star Codex asks if this supports many-worlds: without the extra assumption that quantum events randomly choose one outcome, isn’t quantum mechanics simpler?

I agree with some of this. Science really does use non-empirical reasoning. Without it, there’s no reason not to treat the world as a black box, a series of experiments with no mechanism behind it. But while we do reject theories with unnecessary ideas, that isn’t our only standard. We also need our theories to teach us about the world.

Ultimately, we trust science because it allows us to do things. If we understand the world, we can interact with it: we can build technology, design new experiments, and propose new theories. With this in mind, we can judge scientific theories by how well they help us do these things. A good scientific theory is one that gives us more power to interact with the world. It can do this by making correct predictions, but it can also do this by explaining things, making it easier for us to reason about them. Beyond empiricism, we can judge science by how well it teaches us.

This gives us an objection to the “supernova theory” of Slate Star Codex’s imagined chemists: it’s much more confusing to teach. To teach chemistry in that world you also have to teach the entire life cycle of stars, a subject that students won’t use in any other part of the course. The creationists’ “devil theory” of paleontology has the same problem: if their theory really makes the right predictions they’d have to teach students everything our paleontologists do: every era of geologic history, every theory of dinosaur evolution, plus an extra course in devil psychology. They end up with a mix that only makes it harder to understand the subject.

Many-worlds may seem simpler than other interpretations of quantum mechanics, but that doesn’t make it more useful, or easier to teach. You still need to teach students how to predict the results of experiments, and those results will still be random. If you teach them many-worlds, you need to introduce advanced topics like self-locating uncertainty and decoherence much earlier on. You need a quite extensive set of ideas, many of which won’t be used again, to justify rules another interpretation could have introduced much more simply. This would be fine if those ideas made additional predictions, but they don’t: like every interpretation of quantum mechanics, you end up doing the same experiments and building the same technology in the end.

I’m not saying I know many-worlds is false, or that I know another interpretation is true. All I’m saying is that, when physicists criticize many-worlds, they’re not just blindly insisting on empiricism. They’re rejecting many-worlds, in part, because all it does is make their work harder. And that, more than elegance or simplicity, is how we judge theories.

QCD and Reductionism: Stranger Than You’d Think

Earlier this year, I made a list of topics I wanted to understand. The most abstract and technical of them was something called “Wilsonian effective field theory”. I still don’t understand Wilsonian effective field theory. But while thinking about it, I noticed something that seemed weird. It’s something I think many physicists already understand, but that hasn’t really sunk in with the public yet.

There’s an old problem in particle physics, described in many different ways over the years. Take our theories and try to calculate some reasonable number (say, the angle an electron turns in a magnetic field), and instead of that reasonable number we get infinity. We fix this problem with a process called renormalization that hides that infinity away, changing the “normalization” of some constant like a mass or a charge. While renormalization first seemed like a shady trick, physicists eventually understood it better. First, we thought of it as a way to work around our ignorance, that the true final theory would have no infinities at all. Later, physicists instead thought about renormalization in terms of scaling.

Imagine looking at the world on a camera screen. You can zoom in, or zoom out. The further you zoom out, the more details you’ll miss: they’re just too small to be visible on your screen. You can guess what they might be, but your picture will be different depending on how you zoom.

In particle physics, many of our theories are like that camera. They come with a choice of “zoom setting”, a minimum scale where they still effectively tell the whole story. We call theories like these effective field theories. Some physicists argue that these are all we can ever have: since our experiments are never perfect, there will always be a scale so small we have no evidence about it.

In general, theories can be quite different at different scales. Some theories, though, are especially nice: they look almost the same as we zoom in to smaller scales. The only things that change are the mass of different particles, and their charges.

One theory like this is Quantum Chromodynamics (or QCD), the theory of quarks and gluons. Zoom in, and the theory looks pretty much the same, with one crucial change: the force between particles gets weaker. There’s a number, called the “coupling constant”, that describes how strong a force is; think of it as something like an electric charge. As you zoom in to quarks and gluons, you find you can still describe them with QCD, just with a smaller coupling constant. If you could zoom “all the way in”, the constant (and thus the force between particles) would be zero.
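
For the quantitatively curious, the way the coupling changes with zoom level has a simple standard formula at “one loop”. Here’s a sketch in Python; I’ve frozen the number of quark flavors at five to keep it minimal (a simplification: in a real calculation it changes as you cross quark-mass thresholds):

```python
import numpy as np

# One-loop running of the QCD coupling constant. The flavor number
# n_f is frozen at 5 here for simplicity.
n_f = 5
b0 = (33 - 2 * n_f) / (12 * np.pi)  # one-loop beta-function coefficient

alpha_mz = 0.118   # measured coupling at the Z mass
m_z = 91.19        # Z boson mass, in GeV

def alpha_s(q):
    """QCD coupling at energy scale q (GeV), run from the Z mass."""
    return alpha_mz / (1 + b0 * alpha_mz * np.log(q**2 / m_z**2))

for q in (10.0, 91.19, 1e3, 1e8, 1e16):
    print(f"Q = {q:>8.3g} GeV   alpha_s = {alpha_s(q):.4f}")
# Larger Q means zooming in to smaller distances, and the coupling
# shrinks: 0.173, 0.118, 0.088, 0.039, 0.021, heading toward zero.
```

Run the formula toward smaller Q instead and the coupling grows: either zoom setting determines the other, which is the sense of “reduction” I get to below.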

This makes QCD a rare kind of theory: one that could be complete to any scale. No matter how far you zoom in, QCD still “makes sense”. It never gives contradictions or nonsense results. That doesn’t mean it’s actually true: it interacts with other forces, like gravity, that don’t have complete theories, so it probably isn’t complete either. But if we didn’t have gravity or electricity or magnetism, if all we had were quarks and gluons, then QCD could have been the final theory that described them.

And this starts feeling a little weird, when you think about reductionism.

Philosophers define reductionism in many different ways. I won’t be that sophisticated. Instead, I’ll suggest the following naive definition: Reductionism is the claim that theories on larger scales reduce to theories on smaller scales.

Here “reduce to” is intentionally a bit vague. It might mean “are caused by” or “can be derived from” or “are explained by”. I’m gesturing at the sort of thing people mean when they say that biology reduces to chemistry, or chemistry to physics.

What happens when we think about QCD, with this intuition?

QCD on larger scales does indeed reduce to QCD on smaller scales. If you want to ask why QCD on some scale has some coupling constant, you can explain it by looking at the (smaller) QCD coupling constant on a smaller scale. If you have equations for QCD on a smaller scale, you can derive the right equations for a larger scale. In some sense, everything you observe in your larger-scale theory of QCD is caused by what happens in your smaller-scale theory of QCD.

But this isn’t quite the reductionism you’re used to. When we say biology reduces to chemistry, or chemistry reduces to physics, we’re thinking of just a few layers: one specific theory reduces to another specific theory. Here, we have an infinite number of layers, every point on the scale from large to small, each one explained by the next.

Maybe you think you can get out of this by saying that everything should reduce to the smallest scale. But remember, the smaller the scale the smaller our “coupling constant”, and the weaker the forces between particles. At “the smallest scale”, the coupling constant is zero, and there is no force. It’s only when you put your hand on the zoom knob and start turning that the force starts to exist.

It’s reductionism, perhaps, but not as we know it.

Now that I understand this a bit better, I get some of the objections to my post about naturalness a while back. I was being too naive about this kind of thing, as some of the commenters (particularly Jacques Distler) noted. I believe there’s a way to rephrase the argument so that it still works, but I’d have to think harder about how.

I also get why I was uneasy about Sabine Hossenfelder’s FQXi essay on reductionism. She considered a more complicated case, where the chain from large to small scale could be broken, a more elaborate variant of a problem in Quantum Electrodynamics. But if I’m right here, then it’s not clear that scaling in effective field theories is even the right way to think about this. When you have an infinite series of theories that reduce to other theories, you’re pretty far removed from what most people mean by reductionism.

Finally, this is the clearest reason I can find why you can’t do science without an observer. The “zoom” is just a choice we scientists make, an arbitrary scale describing our ignorance. But without it, there’s no way to describe QCD. The notion of scale is an inherent and inextricable part of the theory, and it doesn’t have to mean our theory is incomplete.

Experts, please chime in if I’m wrong on the physics here. As I mentioned at the beginning, I still don’t think I understand Wilsonian effective field theory. If I’m right though, this seems genuinely weird, and something more of the public should appreciate.

Facts About Our Capabilities Are Facts About the World

A paper leaked from Google last week claimed that their researchers had achieved “quantum supremacy”, the milestone at which a quantum computer performs a calculation faster than any existing classical computer. Scott Aaronson has a great explainer about this. The upshot is that Google’s computer is much too small to crack all our encryptions (only 53 qubits, the quantum equivalent of classical bits), but it still appears to be a genuine quantum computer doing a genuine quantum computation that is genuinely not feasible otherwise.
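
To get a feel for why even 53 qubits is a serious claim: simulating n qubits by brute force means storing 2ⁿ complex amplitudes. A back-of-the-envelope sketch (real simulators have cleverer tricks than this, which is why the benchmarking takes care, but the exponential is the point):

```python
# The brute-force cost of classically simulating n qubits: the state
# is a vector of 2**n complex amplitudes, ~16 bytes each (two doubles).
n_qubits = 53
amplitudes = 2 ** n_qubits
print(f"{amplitudes:.3e} amplitudes, "
      f"about {amplitudes * 16 / 1e15:.0f} petabytes just to store")
# 9.007e+15 amplitudes, about 144 petabytes just to store
```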

How impressed should we be about this?

On one hand, the practical benefits of a 53-qubit computer are pretty minimal. Scott discusses some applications: you can generate random numbers, distributed in a way that will let others verify that they are truly random, the kind of thing it’s occasionally handy to do in cryptography. Still, by itself this won’t change the world, and compared to the quantum computing hype I can understand if people find this underwhelming.

On the other hand, as Scott says, this falsifies the Extended Church-Turing Thesis! And that sounds pretty impressive, right?

Ok, I’m actually just re-phrasing what I said before. The Extended Church-Turing Thesis proposes that a classical computer (more specifically, a probabilistic Turing machine) can efficiently simulate any reasonable computation. Falsifying it means finding something that a classical computer cannot compute efficiently but another sort of computer (say, a quantum computer) can. If the calculation Google did truly can’t be done efficiently on a classical computer (this is not proven, though experts seem to expect it to be true) then yes, that’s what Google claims to have done.

So we get back to the real question: should we be impressed by quantum supremacy?

Well, should we have been impressed by the Higgs?

The detection of the Higgs boson in 2012 hasn’t led to any new Higgs-based technology. No-one expected it to. It did teach us something about the world: that the Higgs boson exists, and that it has a particular mass. I think most people accept that that’s important: that it’s worth knowing how the world works on a fundamental level.

Google may have detected the first-known violation of the Extended Church-Turing Thesis. This could eventually lead to some revolutionary technology. For now, though, it hasn’t. Instead, it teaches us something about the world.

It may not seem like it, at first. Unlike the Higgs boson, “Extended Church-Turing is false” isn’t a law of physics. Instead, it’s a fact about our capabilities. It’s a statement about the kinds of computers we can and cannot build, about the kinds of algorithms we can and cannot implement, the calculations we can and cannot do.

Facts about our capabilities are still facts about the world. They’re still worth knowing, for the same reasons that facts about the world are still worth knowing. They still give us a clearer picture of how the world works, which tells us in turn what we can and cannot do. According to the leaked paper, Google has taught us a new fact about the world, a deep fact about our capabilities. If that’s true we should be impressed, even without new technology.