Tag Archives: quantum field theory

The Quantum Paths Not Traveled

Before this week’s post: a former colleague of mine from CEA Paris-Saclay, Sylvain Ribault, posted a dialogue last week presenting different perspectives on academic publishing. One of the highlights of my brief time at the CEA was getting to chat with Sylvain and others about the future forms academia might take. He showed me a draft of his dialogue a while ago, designed as a way to introduce newcomers to the debate about how, and whether, academics should do peer review. I’ve got a different topic this week, so I won’t say much more about it, but I encourage you to take a look!


Matt Strassler has a nice post up about waves and particles. He’s writing to address a common confusion between two concepts that sound very similar. On the one hand, there are the waves of quantum field theory, ripples in fundamental fields, the smallest versions of which correspond to particles. (Strassler likes to call them “wavicles”, to emphasize their wavy role.) On the other hand, there are the wavefunctions of quantum mechanics, descriptions of the behavior of one or more interacting particles over time. To distinguish, he points out that wavicles can hurt you, while wavefunctions cannot. Wavicles are the things that collide and light up detectors, one by one; wavefunctions are the math that describes when and how that happens. Many types of wavicles can run into each other one by one, but their interactions can all be described together by a single wavefunction. It’s an important point, well stated.

(I do think he goes a bit too far in saying that the wavefunction is not “an object”, though. That smacks of metaphysics, which I don’t think is worth physicists dabbling in.)

After reading his post, there’s something that might still confuse you. You’ve probably heard that in quantum mechanics, an electron is both a wave and a particle. Does the “wave” in that saying mean “wavicle”, or “wavefunction”?

A “wave” built out of particles

The gif above shows data from a double-slit experiment, an important type of experiment from the early days of quantum mechanics. These experiments were first conducted before quantum field theory (and thus, before the ideas that Strassler summarizes with “wavicles”). In a double-slit experiment, particles are shot at a screen through two slits. The particles that hit the screen can travel through one slit or the other.

A double-slit experiment, in diagram form

Classically, you would expect particles shot randomly at the screen to form two piles on the other side, one in front of each slit. Instead, they bunch up into a rippling pattern, the same sort of pattern that was used a century earlier to argue that light was a wave. The peaks and troughs of the wave pass through both slits, and either line up or cancel out, leaving the distinctive pattern.
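
In the textbook idealization of two very narrow slits a distance d apart, lit by a wave of wavelength λ, the brightness at an angle θ from the center follows

$$ I(\theta) \propto \cos^2\!\left(\frac{\pi d \sin\theta}{\lambda}\right), $$

bright where the peaks from the two slits line up and dark where a peak meets a trough.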

When it was discovered that electrons do this too, it led to the idea that electrons must be waves as well, despite also being particles. That insight led to the concept of the wavefunction. So the “wave” in the saying refers to wavefunctions.

But electrons can hurt you, and as Strassler points out, wavefunctions cannot. So how can the electron be a wavefunction?

To risk a bit of metaphysics myself, I’ll just say: it can’t. An electron can’t “be” a wavefunction.

The saying, that electrons are both particles and waves, is from the early days of quantum mechanics, when people were confused about what it all meant. We’re still confused, but we have some better ways to talk about it.

As a start, it’s worth noticing that, whenever you measure an electron, it’s a particle. Each electron that goes through the slits hits your screen as a particle, a single dot. If you see many electrons at once, you may get the feeling that they look like waves. But every actual electron you measure, every time you’re precise enough to notice, looks like a particle. And for each individual electron, you can extrapolate back the path it took, exactly as if it traveled like a particle the whole way through.

The same is true, though, of light! When you see light, photons enter your eyes, and each one that you see triggers a chemical change in a molecule called a photopigment. The same sort of thing happens on photographic film, while in a digital camera an electrical signal gets triggered instead. Light may behave like a wave in some sense, but every time you actually observe it, it looks like a particle.

But while you can model each individual electron, or photon, as a classical particle, you can’t model the distribution of multiple electrons that way.

That’s because in quantum mechanics, the “paths not taken” matter. A single electron will only go through one slit in the double-slit experiment. But the fact that it could have gone through both slits matters, and changes the chance that it goes through each particular path. The possible paths in the wavefunction interfere with each other, the same way different parts of classical waves do.
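
In symbols: if ψ₁ and ψ₂ are the two slits’ contributions to the wavefunction, the probability of landing at a given spot on the screen is

$$ P = \left|\psi_1 + \psi_2\right|^2 = \left|\psi_1\right|^2 + \left|\psi_2\right|^2 + 2\,\mathrm{Re}\!\left(\psi_1^{*}\psi_2\right), $$

and it’s that last cross term, the contribution of the path not taken, that produces the interference pattern.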

That role of the paths not taken, of the “what if”, is the heart and soul of quantum mechanics. No matter how you interpret its mysteries, “what if” matters. If you believe in a quantum multiverse, you think every “what if” happens somewhere in that infinity of worlds. If you think all that matters is observations, then “what if” shows the folly of modeling the world as anything else. If you are tempted to try to mend quantum mechanics with faster-than-light signals, then you have to declare one “what if” the true one. And if you want to double down on determinism and replace quantum mechanics, you need to declare that certain “what if” questions are off-limits.

“What if matters” isn’t the same as a particle traveling every path at once; it’s its own weird thing with its own specific weird consequences. It’s a metaphor, because everything written in words is a metaphor. But it’s a better metaphor than thinking an electron is both a particle and a wave.

The Hidden Higgs

Peter Higgs, the theoretical physicist whose name graces the Higgs boson, died this week.

Peter Higgs, after the Higgs boson discovery was confirmed

This post isn’t an obituary: you can find plenty of those online, and I don’t have anything special to say that others haven’t. Reading the obituaries, you’ll notice they summarize Higgs’s contribution in different ways. Higgs was one of the people who proposed what today is known as the Higgs mechanism, the principle by which most (perhaps all) elementary particles gain their mass. He wasn’t the only one: Robert Brout and François Englert proposed essentially the same idea in a paper published two months earlier, in August 1964. Two other teams came up with the idea slightly later: Gerald Guralnik, Carl Richard Hagen, and Tom Kibble published theirs one month after Higgs, while Alexander Migdal and Alexander Polyakov found the idea independently in 1965 but couldn’t get it published until 1966.

Higgs did, however, do something that Brout and Englert didn’t. His paper doesn’t just propose a mechanism, involving a field which gives particles mass. It also proposes a particle one could discover as a result. Read the more detailed obituaries, and you’ll discover that this particle was not in the original paper: Higgs’s paper was rejected at first, and he added the discussion of the particle to make it more interesting.

At this point, I bet some of you are wondering what the big deal was. You’ve heard me say that particles are ripples in quantum fields. So shouldn’t we expect every field to have a particle?

Tell that to the other three Higgs bosons.

Electromagnetism has one type of charge, with two signs: plus, and minus. There are electrons, with negative charge, and their anti-particles, positrons, with positive charge.

Quarks have three types of charge, called colors: red, green, and blue. Each of these also has two “signs”: red and anti-red, green and anti-green, and blue and anti-blue. So for each type of quark (like an up quark), there are six different versions: red, green, and blue, and anti-quarks with anti-red, anti-green, and anti-blue.

Diagram of the colors of quarks

When we talk about quarks, we say that the force under which they are charged, the strong nuclear force, is an “SU(3)” force. The “S” and “U” there are shorthand for mathematical properties that are a bit too complicated to explain here, but the “(3)” is quite simple: it means there are three colors.

The Higgs boson’s primary role is to make the weak nuclear force weak, by making the particles that carry it from place to place massive. (That way, it takes too much energy for them to go anywhere, a feeling I think we can all relate to.) The weak nuclear force is an “SU(2)” force. So the particles that interact with the weak nuclear force, Higgs bosons included, should come in two “colors”. For each, there should also be an anti-color, just like the quarks had anti-red, anti-green, and anti-blue. So we need two “colors” of Higgs bosons, and two “anti-colors”, for a total of four!
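
In textbook terms, the Standard Model’s Higgs field is a doublet of two complex components (conventionally written φ+ and φ0), and two complex numbers are four real numbers, which is where the count of four comes from:

$$ \Phi = \begin{pmatrix} \phi^{+} \\ \phi^{0} \end{pmatrix}, \qquad \text{2 complex components} = \text{4 real degrees of freedom}. $$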

But the Higgs boson discovered at the LHC was a neutral particle. It didn’t have any electric charge, or any color. There was only one, not four. So what happened to the other three Higgs bosons?

The real answer is subtle, one of those physics things that’s tricky to concisely explain. But a partial answer is that they’re indistinguishable from the W and Z bosons.

Normally, the fundamental forces have transverse waves, with two polarizations. Light can wiggle side to side or up and down as it travels, but it can’t wiggle forward and backward, along its own direction of motion. A fundamental force with massive particles is different, because they can have longitudinal waves: they have an extra direction in which they can wiggle. There are two W bosons (plus and minus) and one Z boson, and they all get one more polarization when they become massive due to the Higgs.

That’s three new ways the W and Z bosons can wiggle. That’s the same number as the number of Higgs bosons that went away, and that’s no coincidence. We physicists like to say that the W and Z bosons “ate” the extra Higgs, which is evocative but may sound mysterious. Instead, you can think of it as the two wiggles being secretly the same, mixing together in a way that makes them impossible to tell apart.

The “count”, of how many wiggles exist, stays the same. You start with four Higgs wiggles, and two wiggles each for the precursors of the W+, W-, and Z bosons, giving ten. You end up with one Higgs wiggle, and three wiggles each for the W+, W-, and Z bosons, which still adds up to ten. But which fields match with which wiggles, and thus which particles we can detect, changes. It takes some thought to look at the whole system and figure out, for each field, what kind of particle you might find.
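
Written out, the bookkeeping is:

$$ \underbrace{4}_{\text{Higgs field}} + \underbrace{3 \times 2}_{\text{massless } W^{+},\, W^{-},\, Z} = 10 = \underbrace{1}_{\text{Higgs boson}} + \underbrace{3 \times 3}_{\text{massive } W^{+},\, W^{-},\, Z}. $$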

Higgs did that work. And now, we call it the Higgs boson.

Book Review: The Case Against Reality

Nima Arkani-Hamed shows up surprisingly rarely in popular science books. A major figure in my former field, Nima is extremely quotable (frequent examples include “spacetime is doomed” and “the universe is not a crappy metal”), but those quotes don’t seem to quite have reached the popular physics mainstream. He’s been interviewed in books by physicists, and has a major role in one popular physics book that I’m aware of. From this scattering of mentions, I was quite surprised to hear of another book where he makes an appearance: not a popular physics book at all, but a popular psychology book: Donald Hoffman’s The Case Against Reality. Naturally, this meant I had to read it.

Then, I saw the first quote on the back cover…or specifically, who was quoted.

Seeing that, I settled in for a frustrating read.

A few pages later, I realized that this, despite his endorsement, is not a Deepak Chopra kind of book. Hoffman is careful in some valuable ways. Specifically, he has a philosopher’s care, bringing up objections and potential holes in his arguments. As a result, the book wasn’t frustrating in the way I expected.

It was even more frustrating, actually. But in an entirely different way.

When a science professor writes a popular book, the result is often a kind of ungainly Frankenstein. The arguments we want to make tend to be better-suited to shorter pieces, like academic papers, editorials, and blog posts. To make these into a book, we have to pad them out. We stir together all the vaguely related work we’ve done, plus all the best-known examples from other peoples’ work, trying (often not all that hard) to make the whole sound like a cohesive story. Read enough examples, and you start to see the joints between the parts.

Hoffman is ostensibly trying to tell a single story. His argument is that the reality we observe, of objects in space and time, is not the true reality. It is a convenient reality, one that has led to our survival, but evolution has not (and as he argues, cannot) let us perceive the truth. Instead, he argues that the true reality is consciousness: a world made up of conscious beings interacting with each other, with space, time, and all the rest emerging as properties of those interactions.

That certainly sounds like it could be one, cohesive argument. In practice, though, it is three, and they don’t fit together as well as he’d hoped.

Hoffman is trained as a psychologist. As such, one of the arguments is psychological: that research shows that we mis-perceive the world in service of evolutionary fitness.

Hoffman is a cognitive scientist, and while many cognitive scientists are trained as psychologists, others are trained as philosophers. As such, one of his arguments is philosophical: that the contents of consciousness can never be explained by relations between material objects, and that evolution, and even science, systematically lead us astray.

Finally, Hoffman has evidently been listening to and reading the work of some physicists, like Nima and Carlo Rovelli. As such, one of his arguments is physical: that physicists believe that space and time are illusions and that consciousness may be fundamental, and that the conclusions of the book lead to his own model of the basic physical constituents of the world.

The book alternates between these three arguments, so rather than going in chapter order, I thought it would be better to discuss each argument in its own section.

The Psychological Argument

Sometimes, when two academics get into a debate, they disagree about what’s true. Two scientists might argue about whether an experiment was genuine, whether the statistics back up a conclusion, or whether a speculative theory is actually consistent. These are valuable debates, and worth reading about if you want to learn something about the nature of reality.

Sometimes, though, two debating academics agree on what’s true, and just disagree on what’s important. These debates are, at best, relevant to other academics and funders. They are not generally worth reading for anybody else, and are often extremely petty and dumb.

Hoffman’s psychological argument, regrettably, is of the latter kind. He would like to claim it’s the former, and to do so he marshals a host of quotes from respected scientists that claim that human perception is veridical: that what we perceive is real, courtesy of an evolutionary process that would have killed us off if it wasn’t. From that perspective, every psychological example Hoffman gives is a piece of counter-evidence, a situation where evolution doesn’t just fail to show us the true nature of reality, but actively hides reality from us.

The problem is that, if you actually read the people Hoffman quotes, they’re clearly not making the extreme point he claims. These people are psychologists, and all they are arguing is that perception is veridical in a particular, limited way. They argue that we humans are good at estimating distances or positions of objects, or that we can see a wide range of colors. They aren’t making some sort of philosophical point about those distances or positions or colors being how the world “really is”, nor are they claiming that evolution never makes humans mis-perceive.

Instead, they, and thus Hoffman, are arguing about importance. When studying humans, is it more useful to think of us as perceiving the world as it is? Or is it more useful to think of evolution as tricking us? Which happens more often?

The answers to each of those questions have to be “it depends”. Neither answer can be right all the time. At most then, this kind of argument can convince one academic to switch from researching in one way to researching in another, by saying that right now one approach is a better strategy. It can’t tell us anything more.

If the argument Hoffman is trying to get across here doesn’t matter, are there other reasons to read this part?

Popular psychology books tend to re-use a few common examples. There are some good ones, so if you haven’t read such a book you probably should read a couple, just to hear about them. For example, Hoffman tells the story of the split-brain patients, which is definitely worth being aware of.

(Those of you who’ve heard that story may be wondering how the heck Hoffman squares it with his idea of consciousness as fundamental. He actually does have a (weird) way to handle this, so read on.)

The other examples come from Hoffman’s research, and other research in his sub-field. There are stories about what optical illusions tell us about our perception, about how evolution primes us to see different things as attractive, and about how advertisers can work with attention.

These stories would at least be a source of a few more cool facts, but I’m a bit wary. The elephant in the room here is the replication crisis. Paper after paper in psychology has turned out to be a statistical mirage, accidental successes that fail to replicate in later experiments. This can happen without any deceit on the part of the psychologist; it’s just a feature of how statistics are typically done in the field.

Some psychologists make a big deal about the replication crisis: they talk about the statistical methods they use, and what they do to make sure they’re getting a real result. Hoffman talks a bit about tricks to rule out other explanations, but mostly doesn’t focus on this kind of thing. This doesn’t mean he’s doing anything wrong: it might just be that it’s off-topic. But it makes it a bit harder to trust him, compared to other psychologists who do make a big deal about it.

The Philosophical Argument

Hoffman structures his book around two philosophical arguments, one that appears near the beginning and another that, as he presents it, is the core thesis of the book. He calls both of these arguments theorems, a naming choice sure to irritate mathematicians and philosophers alike, but the mathematical content in either is for the most part not the point: in each case, the philosophical setup is where the arguments get most of their strength.

The first of these arguments, called The Scrambling Theorem, is set up largely as background material: not his core argument, but just an entry into the overall point he’s making. I found it helpful as a way to get at his reasoning style, the sorts of things he cares about philosophically and the ones he doesn’t.

The Scrambling Theorem is meant to weigh in on the debate over a thought experiment called the Inverted Spectrum, which in turn bears on the philosophical concept of qualia. The Inverted Spectrum asks us to imagine someone who sees the spectrum of light inverted compared to how we see it, so that green becomes red and red becomes green, without anything different about their body or brain. Such a person would learn to refer to colors the same ways that we do, still referring to red blood even though they see what we see when we see green grass. Philosophers argue that, because we can imagine this, the “qualia” we see in color, like red or green, are distinct from their practical role: they are images in the mind’s eye that can be compared across minds, but do not correspond to anything we have yet characterized scientifically in the physical world.

As a response, other philosophers argued that you can’t actually invert the spectrum. Colors aren’t really a symmetric wheel: we can distinguish, for example, more colors between red and blue than between green and yellow. Just flipping colors around would produce detectable differences, differences that would have to have physical implications; you can’t just swap qualia and nothing else.

The Scrambling Theorem is in response to this argument. Hoffman argues that, while you can’t invert the spectrum, you can scramble it. By swapping not only the colors, but the relations between them, you can arrange any arbitrary set of colors however else you’d like. You can declare that green not only corresponds to blood and not grass, but that it has more colors between it and yellow, perhaps by stealing them from the other side of the color wheel. If you’re already allowed to swap colors and their associations around, surely you can do this too, and change order and distances between them.

Believe it or not, I think Hoffman’s argument is correct, at least in its original purpose. You can’t respond to the Inverted Spectrum just by saying that colors are distributed differently on different sides of the color wheel. If you want to argue against the Inverted Spectrum, you need a better argument.

Hoffman’s work happens to suggest that better argument. Because he frames this argument in the language of mathematics, as a “theorem”, Hoffman’s argument is much more general than the summary I gave above. He is arguing that you can scramble not merely colors, but anything you like. If you want to swap electrons and photons, you can: just make your photons interact with everything the way electrons did, and vice versa. As long as you agree that the things you are swapping exist, according to Hoffman, you are free to exchange them and their properties any way you’d like.
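
As a toy version of the kind of move he’s making (my own sketch, not Hoffman’s formal construction): pick an arbitrary relabeling of the “colors”, and push every relation between them through the same relabeling. Any question you can ask about the relations gets the same answer before and after the scramble.

```python
# Toy sketch of "scrambling" (my own illustration, not Hoffman's formal theorem):
# swap the colors AND every relation between them through the same dictionary.

# An arbitrary relation between colors, stored as ordered pairs.
adjacent_on_wheel = {("red", "yellow"), ("yellow", "green"), ("green", "blue")}

# The scramble: an arbitrary bijection on the colors.
swap = {"red": "green", "green": "red", "blue": "yellow", "yellow": "blue"}

# Rewrite the relation in the scrambled vocabulary.
scrambled_relation = {(swap[a], swap[b]) for (a, b) in adjacent_on_wheel}

# A relational question asked of the original world...
print(("red", "yellow") in adjacent_on_wheel)               # True
# ...gets the same answer when asked, in scrambled terms, of the scrambled world.
print((swap["red"], swap["yellow"]) in scrambled_relation)  # True
```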

This is because, to Hoffman, things that “actually exist” cannot be defined just in terms of their relations. An electron is not merely a thing that repels other electrons and is attracted to protons and so on, it is a thing that “actually exists” out there in the world. (Or, as he will argue, it isn’t really. But that’s because in the end he doesn’t think electrons exist.)

(I’m tempted to argue against this with a mathematical object like group elements. Surely the identity element of a group is defined by its relations? But I think he would argue identity elements of groups don’t actually exist.)

In the end, Hoffman is coming from a particular philosophical perspective, one common in modern philosophers of metaphysics, the study of the nature of reality. From this perspective, certain things exist, and are themselves by necessity. We cannot ask what if a thing were not itself. For example, in this perspective it is nonsense to ask what if Superman was not Clark Kent, because the two names refer to the same actually existing person.

(If, you know, Superman actually existed.)

Despite the name of the book, Hoffman is not actually making a case against reality in general. He very much seems to believe in this type of reality, in the idea that there are certain things out there that are real, independent of any purely mathematical definition of their properties. He thinks they are different things than you think they are, but he definitely thinks there are some such things, and that it’s important and scientifically useful to find them.

Hoffman’s second argument is, as he presents it, the core of the book. It’s the argument that’s supposed to show that the world is almost certainly not how we perceive it, even through scientific instruments and the scientific method. Once again, he calls it a theorem: the Fitness Beats Truth theorem.

The Fitness Beats Truth argument begins with a question: why should we believe what we see? Why do we expect that the things we perceive should be true?

In Hoffman’s mind, the only answer is evolution. If we perceived the world inaccurately, we would die out, replaced by creatures that perceived the world better than we did. You might think we also have evidence from biology, chemistry, and physics: we can examine our eyes, test them against cameras, see how they work and what they can and can’t do. But to Hoffman, all of this evidence may be mistaken, because to learn biology, chemistry, and physics we must first trust that we perceive the world correctly to begin with. Evolution, though, doesn’t rely on any of that. Even if we aren’t really bundles of cells replicating through DNA and RNA, we should still expect something like evolution, some process by which things differ, are selected, and reproduce their traits differently in the next generation. Such things are common enough, and general enough, that one can (handwavily) expect them through pure reason alone.

But, says Hoffman’s psychology experience, evolution tricks us! We do mis-perceive, and systematically, in ways that favor our fitness over reality. And so Hoffman asks, how often should we expect this to happen?

The Fitness Beats Truth argument thinks of fitness as randomly distributed: some parts of reality historically made us more fit, some less. This distribution could match reality exactly, so that for any two things that are actually different, they will make us fit in different ways. But it doesn’t have to. There might easily be things that are really very different from each other, but which are close enough from a fitness perspective that to us they seem exactly the same.

The “theorem” part of the argument is an attempt to quantify this. Hoffman imagines a pixelated world, and asks how likely it is that a random distribution of fitness matches a random distribution of pixels. This gets extremely unlikely for a world of any reasonable size, for pretty obvious reasons. Thus, Hoffman concludes: in a world with evolution, we should almost always expect it to hide something from us. The world, if it has any complexity at all, has an almost negligible probability of being as we perceive it.
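
As a rough back-of-the-envelope version of that counting (my own toy model, not Hoffman’s actual theorem): suppose fitness payoffs are assigned at random to N distinct states of the world. The chance that no two different states happen to get exactly the same payoff, so that fitness never hides a real distinction, falls off quickly as N grows.

```python
from math import prod

def prob_fitness_separates(n_states: int, n_payoff_levels: int) -> float:
    """Toy estimate (not Hoffman's actual theorem): the probability that a
    uniformly random assignment of payoffs to n_states distinct world-states
    gives every state a different payoff, with n_payoff_levels possible values."""
    if n_states > n_payoff_levels:
        return 0.0
    m = n_payoff_levels
    # P = (m/m) * ((m-1)/m) * ... * ((m - n_states + 1)/m)
    return prod((m - k) / m for k in range(n_states))

for n in (5, 20, 100):
    print(n, prob_fitness_separates(n, 1000))
# Even with 1000 payoff levels, by 100 world-states the chance that fitness
# preserves every distinction is already below 1%.
```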

On one level, this is all kind of obvious. Evolution does trick us sometimes, just as it tricks other animals. But Hoffman is trying to push this quite far, to say that ultimately our whole picture of reality, not just our eyes and ears and nose but everything we see with microscopes and telescopes and calorimeters and scintillators, all of that might be utterly dramatically wrong. Indeed, we should expect it to be.

In this house, we tend to dismiss the Cartesian Demon. If you have an argument that makes you doubt literally everything, then it seems very unlikely you’ll get anything useful from it. Unlike with Descartes’s Demon, though, Hoffman thinks we won’t be tricked forever. The tricks evolution plays on us mattered in our ancestral environment, but over time we move to stranger and stranger situations. Eventually, our fitness will depend on something new, and we’ll need to learn something new about reality.

This means that ultimately, despite the skeptical cast, Hoffman’s argument fits with the way science already works. We are, very much, trying to put ourselves in new situations and test whether our evolved expectations still serve us well or whether we need to perceive things anew. That is precisely what we in science are always doing, every day. And as we’ll see in the next section, whatever new things we have to learn have no particular reason to be what Hoffman thinks they should be.

But while it doesn’t really matter, I do still want to make one counter-argument to Fitness Beats Truth. Hoffman considers a random distribution of fitness, and asks what the chance is that it matches truth. But fitness isn’t independent of truth, and we know that not just from our perception, but from deeper truths of physics and mathematics. Fitness is correlated with truth, fitness often matches truth, for one key reason: complex things are harder than simple things.

Imagine a creature evolving an eye. They have a reason, based on fitness, to need to know where their prey is moving. If evolution were a magic wand, and chemistry trivial, it would let them see their prey, and nothing else. But evolution is not magic, and chemistry is not trivial. The easiest thing for this creature to see is patches of light and darkness. There are many molecules that detect light, because light is a basic part of the physical world. To detect just prey, you need something much more complicated: molecules and cells and neurons. That complexity imposes a cost, and it means that the first eyes to evolve are spots, detecting just light and darkness.

Hoffman asks us not to assume that we know how eyes work, that we know how chemistry works, because we got that knowledge from our perceptions. But the nature of complexity and simplicity, entropy and thermodynamics and information, these are things we can approach through pure thought, as much as evolution. And those principles tell us that it will always be easier for an organism to perceive the world as it truly is than not, because the world is most likely simple, and perceiving it directly is most likely the simplest option. When benefits get high enough, when fitness gets strong enough, we can of course perceive the wrong thing. But if there is only a small fitness benefit to perceiving something incorrectly, then simplicity will win out. And by asking simpler and simpler questions, we can make real, durable scientific progress towards truth.

The Physical Argument

So if I’m not impressed by the psychology or the philosophy, what about the part that motivated me to read the book in the first place, the physics?

Because this is, in a weird and perhaps crackpot way, a physics book. Hoffman has a specific idea, more specific than just that the world we perceive is an evolutionary illusion, more specific than that consciousness cannot be explained by the relations between physical particles. He has a proposal, based on these ideas, one that he thinks might lead to a revolutionary new theory of physics. And he tries to argue that physicists, in their own way, have been inching closer and closer to his proposal’s core ideas.

Hoffman’s idea is that the world is made, not of particles or fields or anything like that, but of conscious agents. You and I are, in this picture, certainly conscious agents, but so are the sources of everything we perceive. When we reach out and feel a table, when we look up and see the Sun, those are the actions of some conscious agent intruding on our perceptions. Unlike panpsychists, who believe that everything in the world is conscious, Hoffman doesn’t believe that the Sun itself is conscious, or is made of conscious things. Rather, he thinks that the Sun is an evolutionary illusion that rearranges our perceptions in a convenient way. The perceptions still come from some conscious thing or set of conscious things, but unlike in panpsychism they don’t live in the center of our solar system, or in any other place (space and time also being evolutionary illusions in this picture). Instead, they could come from something radically different that we haven’t imagined yet.

Earlier, I mentioned split-brain patients. For anyone who thinks of conscious beings as fundamental, split-brain patients are a challenge. These are people who, as a treatment for epilepsy, had the bridge between the two halves of their brain severed. The result is eerily as if their consciousness was split in two. While they only express one train of thought, that train of thought seems to only correspond to the thoughts of one side of their brain, controlling only half their body. The other side, controlling the other half of their body, appears to have different thoughts, different perceptions, and even different opinions, which are made manifest when instead of speaking they use that side of their body to gesture and communicate. While some argue that these cases are over-interpreted and don’t really show what they’re claimed to, Hoffman doesn’t. He accepts that these split-brain patients genuinely have their consciousness split in two.

Hoffman thinks this isn’t a problem because for him, conscious agents can be made up of other conscious agents. Each of us is conscious, but we are also supposed to be made up of simpler conscious agents. Our perceptions and decisions are not inexplicable, but can be explained in terms of the interactions of the simpler conscious entities that make us up, each one communicating with the others.

Hoffman speculates that everything is ultimately composed of the simplest possible conscious agents. For him, a conscious agent must do two things: perceive, and act. So the simplest possible agent perceives and acts in the simplest possible way. They perceive a single bit of information: 0 or 1, true or false, yes or no. And they take one action, communicating a different bit of information to another conscious agent: again, 0 or 1, true or false, yes or no.

Hoffman thinks that this could be the key to a new theory of physics. Instead of thinking about the world as composed of particles and fields, think about it as composed of these simple conscious agents, each one perceiving and communicating one bit at a time.

Hoffman thinks this, in part, because he sees physics as already going in this direction. He’s heard that “spacetime is doomed”, he’s heard that quantum mechanics is contextual and has no local realism, he’s heard that quantum gravity researchers think the world might be a hologram and space-time has a finite number of bits. This all “rhymes” enough with his proposal that he’s confident physics has his back.

Hoffman is trained in psychology. He seems to know his philosophy, at least enough to engage with the literature there. But he is absolutely not a physicist, and it shows. Time and again it seems like he relies on “pop physics” accounts that superficially match his ideas without really understanding what the physicists are actually talking about.

He keeps up best when it comes to interpretations of quantum mechanics, a field where concepts from philosophy play a meaningful role. He covers the reasons why quantum mechanics keeps philosophers up at night: Bell’s Theorem, which shows that a theory matching the predictions of quantum mechanics cannot be both “realist”, with measurements uncovering pre-existing facts about the world, and “local”, with things only influencing each other at less than the speed of light; the broader notion of contextuality, where measured results depend on which other measurements are made; and the various experiments showing that these quantum predictions hold up in the real world.
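
For the curious, the most commonly tested concrete form of Bell’s result is the CHSH inequality. With two measurement settings per side (a, a′ and b, b′), you combine the measured correlations into

$$ S = E(a,b) - E(a,b') + E(a',b) + E(a',b'). $$

Any local, realist theory keeps |S| ≤ 2; quantum mechanics can push it up to 2√2, and experiments see the larger value.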

These two facts, and their implications, have spawned a whole industry of interpretations of quantum mechanics, where physicists and philosophers decide which side of various dilemmas to take and how to describe the results. Hoffman quotes a few different “non-realist” interpretations: Carlo Rovelli’s Relational Quantum Mechanics, Quantum Bayesianism/QBism, Consistent Histories, and whatever Chris Fields is into. These are all different from one another, which Hoffman is aware of. He just wants to make the case that non-realist interpretations are reasonable, that the physicists collectively are saying “maybe reality doesn’t exist” just like he is.

The problem is that Hoffman’s proposal is not, in the quantum mechanics sense, non-realist. Yes, Hoffman thinks that the things we observe are just an “interface”, that reality is really a network of conscious agents. But in order to have a non-realist interpretation, you need to also have other conscious agents not be real. That’s easily seen from the old “Wigner’s friend” thought experiment, where you put one of your friends in a Schrödinger’s cat-style box. Just as Schrödinger’s cat can be both alive and dead, your friend can both have observed something and not have observed it, or observed something and observed something else. The state of your friend’s mind, just like everything else in a non-realist interpretation, doesn’t have a definite value until you measure it.

Hoffman’s setup doesn’t, and can’t, work that way. His whole philosophical project is to declare that certain things exist and others don’t: the sun doesn’t exist, conscious agents do. In a non-realist interpretation, the sun and other conscious agents can both be useful descriptions, but ultimately nothing “really exists”. Science isn’t a catalogue of what does or doesn’t “really exist”, it’s a tool to make predictions about your observations.

Hoffman gets even more confused when he gets to quantum gravity. He starts out with a common misconception: that the Planck length represents the “pixels” of reality, sort of like the pixels of your computer screen, which he uses to support his “interface” theory of consciousness. This isn’t really the right way to think about the Planck length, though, and it certainly isn’t what the people he’s quoting have in mind. The Planck length is a minimum scale in the sense that space and time stop making sense as one approaches it, but that’s not necessarily because space and time are made up of discrete pixels. Rather, it’s because as you get closer to the Planck length, space and time stop being the most convenient way to describe things. For a relatively simple example of how this can work, see my post here.

From there, he reflects on holography: the discovery that certain theories in physics can be described equally well by what is happening on their boundary as by their interior, the way that a 2D page can hold all the information for an apparently 3D hologram. He talks about the Bekenstein bound, the conjecture that there is a maximum amount of information needed to describe a region of space, proportional not to the volume of the region but to its area. For Hoffman, this feels suspiciously like human vision: if we see just a 2D image of the world, could that image contain all the information needed to construct that world? Could the world really be just what we see?

In a word, no.

On the physics side, the Bekenstein bound is a conjecture, and one that doesn’t always hold. A more precise version that seems to hold more broadly, called the Bousso bound, works by demanding the surface have certain very specific geometric properties in space-time, properties not generally shared by the retinas of our eyes.

But it even fails in Hoffman’s own context, once we remember that there are other types of perception than vision. When we hear, we don’t detect a 2D map, but a 1D set of frequencies, put in “stereo” by our ears. When we feel pain, we can feel it in any part of our body, essentially a 3D picture since it goes inwards as well. Nothing about human perception uniquely singles out a 2D surface.

There is actually something in physics much closer to what Hoffman is imagining, but it trades on a principle Hoffman aspires to get rid of: locality. We’ve known since Einstein that you can’t change the world around you faster than the speed of light. Quantum mechanics doesn’t change that, despite what you may have heard. More than that, simultaneity is relative: two distant events might be at the same time in your reference frame, but for someone else one of them might be first, or the other one might be, there is no one universal answer.

Because of that, if you want to think about things happening one by one, cause following effect, actions causing consequences, then you can’t think of causes or actions as spread out in space. You have to think about what happens at a single point: the location of an imagined observer.

Once you have this concept, you can ask whether describing the world in terms of this single observer works just as well as describing it in terms of a wide open space. And indeed, it actually can do well, at least under certain conditions. But once again, this really isn’t how Hoffman is doing things: he has multiple observers all real at the same time, communicating with each other in a definite order.

In general, a lot of researchers in quantum gravity think spacetime is doomed. They think things are better described in terms of objects with other properties and interactions, with space and time as just convenient approximations for a more complicated reality. They get this both from observing properties of the theories we already have, and from thought experiments showing where those theories cause problems.

Nima, the most catchy of these quotable theorists, is approaching the problem from the direction of scattering amplitudes: the calculations we do to find the probability of observations in particle physics. Each scattering amplitude describes a single observation: what someone far away from a particle collision can measure, independent of any story of what might have “actually happened” to the particles in between. Nima’s goal is to describe these amplitudes purely in terms of those observations, to get rid of the “story” that shows up in the middle as much as possible.

The other theorists have different goals, but have this in common: they treat observables as their guide. They look at the properties that a single observer’s observations can have, and try to take a fresh view, independent of any assumptions about what happens in between.

This key perspective, this key insight, is what Hoffman is missing throughout this book. He has read what many physicists have to say, but he does not understand why they are saying it. His book is titled The Case Against Reality, but he merely trades one reality for another. He stops short of the more radical, more justified case against reality: that “reality”, that thing philosophers argue about and that makes us think we can rule out theories based on pure thought, is itself the wrong approach: that instead of trying to characterize an idealized real world, we are best served by focusing on what we can do.

One thing I didn’t do here is a full critique of Hoffman’s specific proposal, treating it as a proposed theory of physics. That would involve quite a bit more work, on top of what has turned out to be a very long book review. I would need to read not just his popular description, but the actual papers where he makes his case and lays out the relevant subtleties. Since I haven’t done that, I’ll end with a few questions: things that his proposal will need to answer if it aspires to be a useful idea for physics.

  • Are the networks of conscious agents he proposes Turing-complete? In other words, can they represent any calculation a computer can do? If so, they aren’t a useful idea for physics, because you could imagine a network of conscious agents to reproduce any theory you want. The idea wouldn’t narrow things down to get us closer to a useful truth. This was also one of the things that made me uncomfortable with the Wolfram Physics Project.
  • What are the conditions that allow a network of simple conscious agents to make up a bigger conscious agent? Do those conditions depend meaningfully on the network’s agents being conscious, or do they just have to pass messages? If the latter, then Hoffman is tacitly admitting you can make a conscious agent out of non-conscious agents, even if he insists this is philosophically impossible.
  • How do you square this network with relativity and quantum mechanics? Is there a set time, an order in which all the conscious agents communicate with each other? If so, how do you square that with the relativity of simultaneity? Are the agents themselves supposed to be able to be put in quantum states, or is quantum mechanics supposed to emerge from a theory of classical agents?
  • How does evolution fit in here? A big part of Hoffman’s argument was supported by the universality of the evolutionary algorithm. In order for evolution to matter for your simplest agents, they need to be able to be created or destroyed. But then they have more than two actions: not just 0 and 1, but 0, 1, and cease to exist. So you could imagine an even simpler agent, one with just two of those options.

If That Measures the Quantum Vacuum, Anything Does

Sabine Hossenfelder has gradually transitioned from critical written content about physics to YouTube videos, mostly short science news clips with the occasional longer piece. Luckily for us in the unable-to-listen-to-podcasts demographic, the transcripts of these videos are occasionally published on her organization’s Substack.

Unluckily, it feels like the short news format is leading to some lazy metaphors. There are stories science journalists sometimes tell because they’re easy and familiar, even if they don’t really make sense. Scientists often tell them too, for the same reason. But the more careful voices avoid them.

Hossenfelder has been that careful before, but one of her recent pieces falls short. The piece is titled “This Experiment Will Measure Nothing, But Very Precisely”.

The “nothing” in the title is the oft-mythologized quantum vacuum. The story goes that in quantum theory, empty space isn’t really empty. It’s full of “virtual” particles that pop in and out of existence, jostling things around.

This…is not a good way to think about it. Really, it’s not. If you want to understand what’s going on physically, it’s best to think about measurements, and measurements involve particles: you can’t measure anything in pure empty space, you don’t have anything to measure with. Instead, every story you can tell about the “quantum vacuum” and virtual particles, you can tell about interactions between particles that actually exist.

(That post I link above, by the way, was partially inspired by a more careful post by Hossenfelder. She does know this stuff. She just doesn’t always use it.)

Let me tell the story Hossenfelder’s piece is telling, in a less silly way:

In the earliest physics classes, you learn that light does not affect other light. Shine two flashlight beams across each other, and they’ll pass right through. You can trace the rays of each source, independently, keeping track of how they travel and bounce around the room.

In quantum theory, that’s not quite true. Light can interact with light, through subtle quantum effects. This effect is tiny, so tiny it hasn’t been measured before. But with ingenious tricks involving tuning three different lasers in exactly the right way, a team of physicists in Dresden has figured out how it could be done.
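
For reference, the standard low-energy description of this effect is the Euler-Heisenberg correction to ordinary electromagnetism. Schematically (in natural units, with conventions that vary between references), the leading light-by-light term looks like

$$ \mathcal{L}_{\text{EH}} \sim \frac{2\alpha^{2}}{45\, m_{e}^{4}} \left[ \left(\mathbf{E}^{2} - \mathbf{B}^{2}\right)^{2} + 7 \left(\mathbf{E}\cdot\mathbf{B}\right)^{2} \right], $$

suppressed by four powers of the electron mass, which is why the effect is so hard to see.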

And see, that’s already cool, right? It’s cool when people figure out how to see things that have never been seen before, full stop.

But the way Hossenfelder presents it, the cool thing about this is that they are “measuring nothing”. That they’re measuring “the quantum vacuum”, really precisely.

And I mean, you can say that, I guess. But it’s equally true of every subtle quantum effect.

In classical physics, electrons should have a very specific behavior in a magnetic field, called their magnetic moment. Quantum theory changes this: electrons have a slightly different magnetic moment, an anomalous magnetic moment. And people have measured this subtle effect: it’s famously the most precisely confirmed prediction in all of science.
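
The leading quantum correction was computed by Schwinger back in 1948:

$$ a_{e} \equiv \frac{g-2}{2} = \frac{\alpha}{2\pi} + \cdots \approx 0.00116, $$

and both the calculation and the measurement have since been pushed to many more decimal places.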

That effect can equally well be described as an effect of the quantum vacuum. You can draw the same pictures, if you really want to, with virtual particles popping in and out of the vacuum. One effect (light bouncing off light) doesn’t exist at all in classical physics, while the other (electrons moving in a magnetic field) exists, but is subtly different. But both, in exactly the same sense, are “measurements of nothing”.

So if you really want to stick on the idea that, whenever you measure any subtle quantum effect, you measure “the quantum vacuum”…then we’re already doing that, all the time. Using it to popularize some stuff (say, this experiment) and not other stuff (the LHC is also measuring the quantum vacuum) is just inconsistent.

Better, in my view, to skip the silly talk about nothing. Talk about what we actually measure. It’s cool enough that way.

IPhT’s 60-Year Anniversary

This year is the 60th anniversary of my new employer, the Institut de Physique Théorique of CEA Paris-Saclay (or IPhT for short). In celebration, they’re holding a short conference, with a variety of festivities. They’ve been rushing to complete a film about the institute, and I hear there’s even a vintage arcade game decorated with Feynman diagrams. For me, it will be a chance to learn a bit more about the history of this place, which I currently know shamefully little about.

(For example, despite having his textbook on my shelf, I don’t know much about what our Auditorium’s namesake Claude Itzykson was known for.)

Since I’m busy with the conference this week, I won’t have time for a long blog post. Next week I’ll be able to say more, and tell you what I learned!

Theorems About Reductionism

A reductionist would say that the behavior of the big is due to the behavior of the small. Big things are made up of small things, so anything the big things do must be explicable in terms of what the small things are doing. It may be very hard to explain things this way: you wouldn’t want to describe the economy in terms of motion of carbon atoms. But in principle, if you could calculate everything, you’d find the small things are enough: there are no fundamental “new rules” that only apply to big things.

A physicist reductionist would have to amend this story. Zoom in far enough, and it doesn’t really make sense to talk about “small things”, “big things”, or even “things” at all. The world is governed by interactions of quantum fields, ripples spreading and colliding and changing form. Some of these ripples (like the ones we call “protons”) are made up of smaller things…but ultimately most aren’t. They just are what they are.

Still, a physicist can rescue the idea of reductionism by thinking about renormalization. If you’ve heard of renormalization, you probably think of it as a trick physicists use to hide inconvenient infinite results in their calculations. But an arguably better way to think about it is as a kind of “zoom” dial for quantum field theories. Starting with a theory, we can use renormalization to “zoom out”, ignoring the smallest details and seeing what picture emerges. As we “zoom”, different forces will seem to get stronger or weaker: electromagnetism matters less when we zoom out, the strong nuclear force matters more.
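
In equations, that “zoom dial” is the renormalization group. Schematically, at leading order a coupling g changes with the zoom-in scale μ according to a beta function,

$$ \mu \frac{dg}{d\mu} = \beta(g) = \frac{b_{0}}{16\pi^{2}}\, g^{3} + \cdots, $$

with b₀ > 0 for electromagnetism (so it fades as you zoom out to smaller μ) and b₀ = −(11 − 2nf/3) < 0 for the strong force with nf quark flavors (so it grows).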

(Why then, is electromagnetism so much more important in everyday life? The strong force gets so strong as we zoom out that we stop seeing individual particles, and only see them bound into protons and neutrons. Electromagnetism is like this too, binding electrons and protons into neutral atoms. In both cases, it can be better, once we’ve zoomed out, to use a new description: you don’t want to do chemistry keeping track of the quarks and gluons.)

A physicist reductionist, then, would expect renormalization to always go “one way”. As we “zoom out”, we should find that our theories in a meaningful sense get simpler and simpler. Maybe they’re still hard to work with: it’s easier to think about gluons and quarks when zoomed in than the zoo of different nuclear particles we need to consider when zoomed out. But at each step, we’re ignoring some details. And if you’re a reductionist, you shouldn’t expect “zooming out” to show you anything truly fundamentally new.

Can you prove that, though?

Surprisingly, yes!

In 2011, Zohar Komargodski and Adam Schwimmer proved a result called the a-theorem. “The a-theorem” is probably the least google-able theorem in the universe, which has probably made it hard to popularize. It is named after a quantity, labeled “a”, that gives a particular way to add up energy in a quantum field theory. Komargodski and Schwimmer proved that, when you do the renormalization procedure and “zoom out”, then this quantity “a” will always get smaller.
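
A bit more concretely (with normalizations that differ between papers): “a” is one of the two coefficients in the four-dimensional trace anomaly, the theory’s response to being put on a curved background,

$$ \langle T^{\mu}{}_{\mu} \rangle \sim \frac{c}{16\pi^{2}}\, W_{\mu\nu\rho\sigma} W^{\mu\nu\rho\sigma} - \frac{a}{16\pi^{2}}\, E_{4}, $$

where E₄ is the Euler density, and the theorem’s statement is simply that a(UV) > a(IR): zooming out always lowers a.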

Why does this say anything about reductionism?

Suppose you have a theory that violates reductionism. You zoom out, and see something genuinely new: a fact about big things that isn’t due to facts about small things. If you had a theory like that, then you could imagine “zooming in” again, and using your new fact about big things to predict something about the small things that you couldn’t before. You’d find that renormalization doesn’t just go “one way”: with new facts able to show up at every scale, zooming out isn’t necessarily ignoring more and zooming in isn’t necessarily ignoring less. It would depend on the situation which way the renormalization procedure would go.

The a-theorem puts a stop to this. It says that, when you “zoom out”, there is a number that always gets smaller. In some ways it doesn’t matter what that number is (as long as you’re not cheating and using the scale directly). In this case, it is a number that loosely counts “how much is going on” in a given space. And because it always decreases when you do renormalization, it means that renormalization can never “go backwards”. You can never renormalize back from your “zoomed out” theory to the “zoomed in” one.

The a-theorem, like every theorem, is based on assumptions. Here, the assumptions are mostly that quantum field theory works in the normal way, that the theory we’re dealing with is not a totally new type of theory instead. One assumption I find interesting is the assumption of locality, that no signals can travel faster than the speed of light. On a naive level, this makes a lot of sense to me. If you can send signals faster than light, then you can’t control your “zoom lens”. Physics in a small area might be changed by something happening very far away, so you can’t “zoom in” in a way that lets you keep including everything that could possibly be relevant. If you have signals that go faster than light, you could transmit information between different parts of big things without them having to “go through” small things first. You’d screw up reductionism, and have surprises show up on every scale.

Personally, I find it really cool that it’s possible to prove a theorem that says something about a seemingly philosophical topic like reductionism. Even with assumptions (and even with the above speculations about the speed of light), it’s quite interesting that one can say anything at all about this kind of thing from a physics perspective. I hope you find it interesting too!

Amplitudes 2023 Retrospective

I’m back from CERN this week, with a bit more time to write, so I thought I’d share some thoughts about last week’s Amplitudes conference.

One thing I got wrong in last week’s post: I’ve now been told only 213 people actually showed up in person, as opposed to the 250-ish estimate I had last week. This may seem fewer than Amplitudes in Prague had, but it seems likely they also had a few fewer show up than appeared on the website. Overall, the field is at least holding steady from year to year, and has definitely grown compared to before the pandemic (2019’s 175 was already a very big attendance).

It was cool having a conference in CERN proper, surrounded by the history of European particle physics. The lecture hall had an abstract particle collision carved into the wood, and the visitor center would in principle have had Standard Model coffee mugs, were they not sold out until next May. (There was still enough other particle physics swag, Swiss chocolate, and Swiss chocolate that was also particle physics swag.) I’d planned to stay on-site at the CERN hostel, but I ended up appreciating not doing that: the folks who did seemed to end up a bit cooped up by the end of the conference, even with the conference dinner as a chance to get out.

Past Amplitudes conferences have had associated public lectures. This time we had a not-supposed-to-be-public lecture, a discussion between Nima Arkani-Hamed and Beate Heinemann about the future of particle physics. Nima, prominent as an amplitudeologist, also has a long track record of reasoning about what might lie beyond the Standard Model. Beate Heinemann is an experimentalist, one who has risen through the ranks of a variety of different particle physics experiments, ending up well-positioned to take a broad view of all of them.

It would have been fun if the discussion erupted into an argument, but despite some attempts at provocative questions from the audience that was not going to happen, as Beate and Nima have been friends for a long time. Instead, they exchanged perspectives: on what’s coming up experimentally, and what theories could explain it. Both argued that it was best to have many different directions, a variety of experiments covering a variety of approaches. (There wasn’t any evangelism for particular experiments, besides a joking sotto voce mention of a muon collider.) Nima in particular advocated that, whether theorist or experimentalist, you have to have some belief that what you’re doing could lead to a huge breakthrough. If you think of yourself as just a “foot soldier”, covering one set of checks among many, then you’ll lose motivation. I think Nima would agree that this optimism is irrational, but necessary, sort of like how one hears (maybe inaccurately) that most new businesses fail, but someone still needs to start businesses.

Michelangelo Mangano’s talk on Thursday covered similar ground, but with different emphasis. He agrees that there are still things out there worth discovering: that our current model of the Higgs, for instance, is in some ways just a guess: a simplest-possible answer that doesn’t explain as much as we’d like. But he also emphasized that Standard Model physics can be “new physics” too. Just because we know the model doesn’t mean we can calculate its consequences, and there are a wealth of results from the LHC that improve our models of protons, nuclei, and the types of physical situations they partake in, without changing the Standard Model.

We saw an impressive example of this in Gregory Korchemsky’s talk on Wednesday. He presented an experimental mystery, an odd behavior in the correlation of energies of jets of particles at the LHC. These jets can include a very large number of particles, enough to make it very hard to understand them from first principles. Instead, Korchemsky tried out our field’s favorite toy model, where such calculations are easier. By modeling the situation in the limit of a very large number of particles, he was able to reproduce the behavior of the experiment. The result was a reminder of what particle physics was like before the Standard Model, and what it might become again: partial models to explain odd observations, a quest to use the tools of physics to understand things we can’t just a priori compute.

On the other hand, amplitudes does do a priori computations pretty well too. Fabrizio Caola's talk opened the conference by reminding us just how much our precise calculations can do. He pointed out that the LHC has only gathered 5% of its planned data, and already it is able to rule out certain types of new physics up to fairly high energies (by ruling out indirect effects that would show up in high-precision calculations). One of those precise calculations featured in the next talk, by Giulio Gambuti. (A FORM user, his diagrams were the basis for the header image of my Quanta article last winter.) Tiziano Peraro followed up with a technique meant to speed up these kinds of calculations, a trick to simplify one of the more computationally intensive steps in intersection theory.

The rest of Monday was more mathematical, with talks by Zeno Capatti, Jaroslav Trnka, Chia-Kai Kuo, Anastasia Volovich, Francis Brown, Michael Borinsky, and Anna-Laura Sattelberger. Borinsky's talk felt the most practical, a refinement of his numerical methods complete with some actual claims about computational efficiency. Francis Brown discussed an impressively powerful result, a set of formulas that manages to unite a variety of invariants of Feynman diagrams under a shared explanation.

Tuesday began with what I might call “visitors”: people from adjacent fields with an interest in amplitudes. Alday described how the duality between string theory in AdS space and super Yang-Mills on the boundary can be used to get quite concrete information about string theory, calculating how the theory’s amplitudes are corrected by the curvature of AdS space using a kind of “bootstrap” method that felt nicely familiar. Tim Cohen talked about a kind of geometric picture of theories that extend the Standard Model, including an interesting discussion of whether it’s really “geometric”. Marko Simonovic explained how the integration techniques we develop in scattering amplitudes can also be relevant in cosmology, especially for the next generation of “sky mappers” like the Euclid telescope. This talk was especially interesting to me since this sort of cosmology has a significant presence at CEA Paris-Saclay. Along those lines an interesting paper, “Cosmology meets cohomology”, showed up during the conference. I haven’t had a chance to read it yet!

Just before lunch, we had David Broadhurst give one of his inimitable talks, complete with number theory, extremely precise numerics, and literary and historical references (apparently, Källén died flying his own plane). He also remedied a gap in our whimsically biological diagram naming conventions, renaming the pedestrian "double-box" as a (in this context, Orwellian) lobster. Karol Kampf described unusual structures in a particular Effective Field Theory, while Henriette Elvang's talk addressed what would become a meaningful subtheme of the conference, where methods from the mathematical field of optimization help amplitudes researchers constrain the space of possible theories. Giulia Isabella covered another topic on this theme later in the day, though one of her group's selling points is managing to avoid quite such heavy-duty computations.

The other three talks on Tuesday dealt with amplitudes techniques for gravitational wave calculations, as did the first talk on Wednesday. Several of the calculations only dealt with scattering black holes, instead of colliding ones. While some of the results can be used indirectly to understand the colliding case too, a method to directly calculate behavior of colliding black holes came up again and again as an important missing piece.

The talks on Wednesday had to start late, owing to a rather bizarre power outage (the lights in the room worked fine, but not the projector). Since Wednesday was the free afternoon (home of quickly sold-out CERN tours), this meant there were only three talks: Veneziano’s talk on gravitational scattering, Korchemsky’s talk, and Nima’s talk. Nima famously never finishes on time, and this time attempted to control his timing via the surprising method of presenting, rather than one topic, five “abstracts” on recent work that he had not yet published. Even more surprisingly, this almost worked, and he didn’t run too ridiculously over time, while still managing to hint at a variety of ways that the combinatorial lessons behind the amplituhedron are gradually yielding useful perspectives on more general realistic theories.

Thursday, Andrea Puhm began with a survey of celestial amplitudes, a topic that tries to build the same sort of powerful duality used in AdS/CFT but for flat space instead. They’re gradually tackling the weird, sort-of-theory they find on the boundary of flat space. The two next talks, by Lorenz Eberhardt and Hofie Hannesdottir, shared a collaborator in common, namely Sebastian Mizera. They also shared a common theme, taking a problem most people would have assumed was solved and showing that approaching it carefully reveals extensive structure and new insights.

Cristian Vergu, in turn, delved deep into the literature to build up a novel and unusual integration method. We’ve chatted quite a bit about it at the Niels Bohr Institute, so it was nice to see it get some attention on the big stage. We then had an afternoon of trips beyond polylogarithms, with talks by Anne Spiering, Christoph Nega, and Martijn Hidding, each pushing the boundaries of what we can do with our hardest-to-understand integrals. Einan Gardi and Ruth Britto finished the day, with a deeper understanding of the behavior of high-energy particles and a new more mathematically compatible way of thinking about “cut” diagrams, respectively.

On Friday, João Penedones gave us an update on a technique with some links to the effective field theory-optimization ideas that came up earlier, one that “bootstraps” whole non-perturbative amplitudes. Shota Komatsu talked about an intriguing variant of the “planar” limit, one involving large numbers of particles and a slick re-writing of infinite sums of Feynman diagrams. Grant Remmen and Cliff Cheung gave a two-parter on a bewildering variety of things that are both surprisingly like, and surprisingly unlike, string theory: important progress towards answering the question “is string theory unique?”

Friday afternoon brought the last three talks of the conference. James Drummond had more progress trying to understand the symbol letters of supersymmetric Yang-Mills, while Callum Jones showed how Feynman diagrams can apply to yet another unfamiliar field, the study of vortices and their dynamics. Lance Dixon closed the conference without any Greta Thunberg references, but with a result that explains last year’s mystery of antipodal duality. The explanation involves an even more mysterious property called antipodal self-duality, so we’re not out of work yet!

Not Made of Photons Either

If you know a bit about quantum physics, you might have heard that everything is made out of particles. Mass comes from Higgs particles, gravity from graviton particles, and light and electricity and magnetism from photon particles. The particles are the “quanta”, the smallest possible units of stuff.

This is not really how quantum physics works.

You might have heard (instead, or in addition) that light is both particle and wave. Maybe you've heard it said that it is both at the same time, or that it is one or the other, depending on how you look at it.

This is also not really how quantum physics works.

If you think that light is both a particle and a wave, you might get the impression there are only two options. This is better than thinking there is only one option, but still not really the truth. The truth is there are many options. It all depends on what you measure.

Suppose you have a particle collider, like the Large Hadron Collider at CERN. Sometimes, the particles you collide release photons. You surround the collision with particle detectors. When a photon hits them, these particle detectors amplify it, turning it into an electrical signal in a computer.

If you want to predict what those particle detectors see, you might put together a theory of photons. You’ll try to calculate the chance that you see some specific photon with some specific energy to some reasonable approximation…and you’ll get infinity.

You might think you’ve heard this story before. Maybe you’ve heard people talk about calculations in quantum field theory that give infinity, with buzzwords like divergences and renormalization. You may remember them saying that this is a sign that our theories are incomplete, that there are parameters we can’t predict or that the theory is just a low-energy approximation to a deeper theory.

This is not that story. That story is about “ultraviolet divergences”, infinities that come from high-energy particles. This story is about “infrared divergences” from low-energy particles. Infrared divergences don’t mean our theory is incomplete. Our theory is fine. We’re just using it wrong.

The problem is that I lied to you a little bit, earlier. I told you that your particle detectors can detect photons, so you might have imagined they can detect any photon you like. But that is impossible. A photon's energy is determined by its wavelength: X-rays have more energy than UV light, which has more energy than IR light, which has more energy than microwaves. No matter how you build your particle detector, there will be some energy low enough that it cannot detect: some wavelength of photon that gives no response at all.
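Here's a rough numerical sketch of that limitation (the one-electronvolt threshold is a number I made up purely for illustration, not a property of any real detector):

```python
# Photon energy falls with wavelength (E = h*c / wavelength), so a detector with
# any finite energy threshold is blind to sufficiently long-wavelength photons.
# The 1 eV threshold below is a made-up number, just for illustration.
H_TIMES_C = 1239.84  # Planck's constant times the speed of light, in eV * nm

def photon_energy_eV(wavelength_nm: float) -> float:
    return H_TIMES_C / wavelength_nm

THRESHOLD_EV = 1.0  # hypothetical detector threshold

for name, wavelength_nm in [("X-ray", 1.0), ("UV", 100.0), ("IR", 1e4), ("microwave", 1e7)]:
    energy = photon_energy_eV(wavelength_nm)
    print(f"{name:>9}: {energy:10.3e} eV  detected: {energy > THRESHOLD_EV}")
```

However low you set the threshold, some row at the bottom of a table like this will always come out undetected.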

When you think you’re detecting just one photon, then, you’re not actually detecting just one photon. You’re detecting one photon, plus some huge number of undetectable photons that are too low-energy to see. We call these soft photons. You don’t know how many soft photons you generate, because you can’t detect them. Thus, as always in quantum mechanics, you have to add up every possibility.

That adding up is crucial, because it makes the infinite results go away. The different infinities pair up, negative and positive, at each order of approximation. Those pesky infrared divergences aren’t really a problem, provided you’re honest about what you’re actually detecting.
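Here's a toy numerical sketch of that cancellation (the coupling, energy scales, and logarithmic form are schematic stand-ins I've chosen for illustration, not a real QED calculation):

```python
import math

# Toy sketch of infrared cancellation: the "virtual" correction and the rate for
# emitting undetectable soft photons each blow up as the infrared regulator
# lam -> 0, but with opposite signs, so their sum stays finite.
A = 0.01         # toy coupling strength (made up)
Q = 100.0        # toy collision energy scale (made up)
E_DETECT = 1.0   # toy detector threshold: photons softer than this go unseen

def virtual_correction(lam: float) -> float:
    return -A * math.log(Q / lam)        # diverges negatively as lam -> 0

def undetected_soft_emission(lam: float) -> float:
    return A * math.log(E_DETECT / lam)  # diverges positively as lam -> 0

for lam in (1e-3, 1e-6, 1e-12):
    total = 1.0 + virtual_correction(lam) + undetected_soft_emission(lam)
    print(f"regulator {lam:.0e}: total rate factor {total:.6f}")
# every line prints the same number, 1 + A*log(E_DETECT/Q): the regulator
# dependence cancels, leaving only a dependence on the detector threshold
```

The leftover dependence on the detector threshold is real physics: what you predict genuinely depends on what your detector can and cannot see.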

But while infrared divergences aren’t really a problem, they do say something about your model. You were modeling particles as single photons, and that made your calculations complicated, with a lot of un-physical infinite results. But you could, instead, have made another model. You could have modeled particles as dressed photons: one photon, plus a cloud of soft photons.

For a particle physicist, these dressed photons have advantages and disadvantages. They aren't always the best tool, and can be complicated to use. But one thing they definitely do is avoid infinite results. You can interpret them a little more easily.

That ease, though, raises a question. You started out with a model in which each particle you detect was a photon. You could have imagined it as a model of reality, one in which every electromagnetic field was made up of photons.

But then you found another model, one which sometimes makes more sense. And in that model, instead, you model your particles as dressed photons. You could then once again imagine a model of reality, now with every electromagnetic field made up of dressed photons, not the ordinary ones.

So now it looks like you have three options. Are electromagnetic fields made out of waves, or particles…or dressed particles?

That’s a trick question. It was always a trick question, and will always be a trick question.

Ancient Greek philosophers argued about whether everything was made of water, or fire, or innumerable other things. Now, we teach children that science has found the answer: a world made of atoms, or protons, or quarks.

But scientists are actually answering a different, and much more important, question. “What is everything really made of?” is still a question for philosophers. We scientists want to know what we will observe. We want a model that makes predictions, that tells us what actions we can do and what results we should expect, that lets us develop technology and improve our lives.

And if we want to make those predictions, then our models can make different choices. We can arrange things in different ways, grouping the fluid possibilities of reality into different concrete “stuff”. We can choose what to measure, and how best to describe it. We don’t end up with one “what everything is made of”, but more than one, different stories for different contexts. As long as those models make the right predictions, we’ve done the only job we ever needed to do.

Cabinet of Curiosities: The Deluxe Train Set

I’ve got a new paper out this week with Andrew McLeod. I’m thinking of it as another entry in this year’s “cabinet of curiosities”, interesting Feynman diagrams with unusual properties. Although this one might be hard to fit into a cabinet.

Over the past few years, I’ve been finding Feynman diagrams with interesting connections to Calabi-Yau manifolds, the spaces originally studied by string theorists to roll up their extra dimensions. With Andrew and other collaborators, I found an interesting family of these diagrams called traintracks, which involve higher-and-higher dimensional manifolds as they get longer and longer.

This time, we started hooking up our traintracks together.

We call diagrams like these traintrack network diagrams, or traintrack networks for short. The original traintracks just went “one way”: one family, going higher in Calabi-Yau dimension the longer they got. These networks branch out, one traintrack leading to another and another.

In principle, these are much more complicated diagrams. But we find we can work with them in almost the same way. We can find the same "starting point" we had for the original traintracks, the set of integrals used to find the Calabi-Yau manifold. We've even got more reliable tricks, a method recently honed by some friends of ours that consistently finds a Calabi-Yau manifold inside the original traintracks.

Surprisingly, though, this isn’t enough.

It works for one type of traintrack network, a so-called “cross diagram” like this:

But for other diagrams, if the network branches any more, the trick stops working. We still get an answer, but that answer is some more general space, not just a Calabi-Yau manifold.

That doesn’t mean that these general traintrack networks don’t involve Calabi-Yaus at all, mind you: it just means this method doesn’t tell us one way or the other. It’s also possible that simpler versions of these diagrams, involving fewer particles, will once again involve Calabi-Yaus. This is the case for some similar diagrams in two dimensions. But it’s starting to raise a question: how special are the Calabi-Yau related diagrams? How general do we expect them to be?

Another fun thing we noticed has to do with differential equations. There are equations that relate one diagram to another simpler one. We've used them in the past to build up "ladders" of diagrams, relating each picture to one with one of its boxes "deleted". We noticed, playing with these traintrack networks, that these equations do a bit more than we thought. "Deleting" a box can make a traintrack shorter, but it can also chop a traintrack in half, leaving two "dangling" pieces, one on either side.

This reminded me of an important point, one we occasionally lose track of. The best-studied diagrams related to Calabi-Yaus are called “sunrise” diagrams. If you squish together a loop in one of those diagrams, the whole diagram squishes together, becoming much simpler. Because of that, we’re used to thinking of these as diagrams with a single “geometry”, one that shows up when you don’t “squish” anything.

Traintracks, and traintrack networks, are different. "Squishing" the diagram, or "deleting" a box, gives you a simpler diagram, but not much simpler. In particular, the new diagram will still contain traintracks, and traintrack networks. That means that we really should think of each traintrack network not just as one "top geometry", but as a collection of geometries, different Calabi-Yaus that break into different combinations of Calabi-Yaus in different ways. It's something we probably should have anticipated, but the form these networks take is a good reminder, one that points out that we still have a lot to do if we want to understand these diagrams.

What’s a Cosmic String?

Nowadays, we have telescopes that detect not just light, but gravitational waves. We’ve already learned quite a bit about astrophysics from these telescopes. They observe ripples coming from colliding black holes, giving us a better idea of what kinds of black holes exist in the universe. But the coolest thing a gravitational wave telescope could discover is something that hasn’t been seen yet: a cosmic string.

This art is from an article in Symmetry magazine which is, as far as I can tell, not actually about cosmic strings.

You might have heard of cosmic strings, but unless you’re a physicist you probably don’t know much about them. They’re a prediction, coming from cosmology, of giant string-like objects floating out in space.

That might sound like it has something to do with string theory, but it doesn't actually have to: you can have these things without any string theory at all. Instead, you might have heard that cosmic strings are some kind of "cracks" or "wrinkles" in space-time. Some articles describe this as like what happens when ice freezes, cracks forming as water settles into a crystal.

That description, in terms of ice forming cracks between crystals, is great…if you’re a physicist who already knows how ice forms cracks between crystals. If you’re not, I’m guessing reading those kinds of explanations isn’t helpful. I’m guessing you’re still wondering why there ought to be any giant strings floating in space.

The real explanation has to do with a type of mathematical gadget physicists use, called a scalar field. You can think of a scalar field as described by a number, like a temperature, that can vary in space and time. The field carries potential energy, and that energy depends on what the scalar field's "number" is. Left alone, the field settles into a situation with as little potential energy as it can, like a ball rolling down a hill. That situation is one of the field's default values, something we call a "vacuum" value. Changing the field away from its vacuum value can take a lot of energy. The Higgs field is one example of a scalar field. Its vacuum value is the value it has in day-to-day life. In order to make a detectable Higgs boson at the Large Hadron Collider, they needed to change the field away from its vacuum value, and that took a lot of energy.

In the very early universe, almost back at the Big Bang, the world was famously in a hot dense state. That hot dense state meant that there was a lot of energy to go around, so scalar fields could vary far from their vacuum values, pretty much randomly. As the universe expanded and cooled, there was less and less energy available for these fields, and they started to settle down.

Now, the thing about these default, “vacuum” values of a scalar field is that there doesn’t have to be just one of them. Depending on what kind of mathematical function the field’s potential energy is, there could be several different possibilities each with equal energy.

Let’s imagine a simple example, of a field with two vacuum values: +1 and -1. As the universe cooled down, some parts of the universe would end up with that scalar field number equal to +1, and some to -1. But what happens in between?

The scalar field can’t just jump from -1 to +1, that’s not allowed in physics. It has to pass through 0 in between. But, unlike -1 and +1, 0 is not a vacuum value. When the scalar field number is equal to 0, the field has more energy than it does when it’s equal to -1 or +1. Usually, a lot more energy.

That means the region of scalar field number 0 can’t spread very far: the further it spreads, the more energy it takes to keep it that way. On the other hand, the region can’t vanish altogether: something needs to happen to transition between the numbers -1 and +1.

The thing that happens is called a domain wall. A domain wall is a thin sheet, as thin as it can physically be, where the scalar field doesn’t take its vacuum value. You can roughly think of it as made up of the scalar field, a churning zone of the kind of bosons the LHC was trying to detect.
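For the double-well potential above (still just my illustrative choice), this thin sheet even has a simple explicit profile, the classic "kink" solution:

\phi(x) = \tanh\left( \sqrt{2\lambda}\, x \right)

Moving along the direction x perpendicular to the wall, the field slides smoothly from -1 on one side to +1 on the other, passing through 0 only within a region of thickness roughly 1/\sqrt{2\lambda}.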

This sheet still has a lot of energy, bound up in the unusual value of the scalar field, like an LHC collision in every proton-sized chunk. As such, like any object with a lot of energy, it has a gravitational field. For a domain wall, the effect of this gravity would be very very dramatic: so dramatic, that we’re pretty sure they’re incredibly rare. If they were at all common, we would have seen evidence of them long before now!

Ok, I’ve shown you a wall, that’s weird, sure. What does that have to do with cosmic strings?

The number representing a scalar field doesn’t have to be a real number: it can be imaginary instead, or even complex. Now I’d like you to imagine a field with vacuum values on the unit circle, in the complex plane. That means that +1 and -1 are still vacuum values, but so are e^{i \pi/2}, and e^{3 i \pi/2}, and everything else you can write as e^{i\theta}. However, 0 is still not a vacuum value. Neither is, for example, 2 e^{i\pi/3}.
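To keep using my earlier illustrative example (again, a standard textbook choice rather than anything specific to this post), a potential energy with exactly this circle of vacuum values is the "Mexican hat":

V(\phi) = \lambda \left( |\phi|^2 - 1 \right)^2

This vanishes, taking its lowest possible value, whenever |\phi| = 1, which means at every point e^{i\theta} of the unit circle. It is positive at 0 (where it equals \lambda) and at points off the circle like 2 e^{i\pi/3} (where it equals 9\lambda).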

With vacuum values like this, you can't form domain walls. You can make a path between -1 and +1 that stays on the unit circle, passing through e^{i \pi/2} for example. The field will be at its vacuum value throughout, taking no extra energy.

However, imagine the different regions form a circle. In the picture above, suppose that the blue area at the bottom is at vacuum value -1 and red is at +1. You might have e^{i \pi/2} in the green region, and e^{3 i \pi/2} in the purple region, covering the whole circle smoothly as you go around.

Now, think about what happens in the middle of the circle. On one side of the circle, you have -1. On the other, +1. (Or, on one side e^{i \pi/2}, on the other, e^{3 i \pi/2}). No matter what, different sides of the circle are not allowed to be next to each other, you can’t just jump between them. So in the very middle of the circle, something else has to happen.

Once again, that something else is a field that goes away from its vacuum value, that passes through 0. Once again, that takes a lot of energy, so it occupies as little space as possible. But now, that space isn’t a giant wall. Instead, it’s a squiggly line: a cosmic string.
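For the complex field above, there's a standard way to sketch such a configuration (a rough ansatz, not anything specific claimed here): far from the string, the field stays on its vacuum circle but winds once around it as you circle the string,

\phi(r, \theta) \approx f(r) \, e^{i\theta}

where r is the distance from the central line, \theta is the angle around it, and f(r) approaches 1 far away. Because e^{i\theta} has no consistent value at r = 0, the only smooth option is f(0) = 0: right along that central line the field is forced away from its vacuum value, and that thin line of trapped energy is the cosmic string.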

Cosmic strings don’t have as dramatic a gravitational effect as domain walls. That means they might not be super-rare. There might be some we haven’t seen yet. And if we do see them, it could be because they wiggle space and time, making gravitational waves.

Cosmic strings don’t require string theory, they come from a much more basic gadget, scalar fields. We know there is one quite important scalar field, the Higgs field. The Higgs vacuum values aren’t like +1 and -1, or like the unit circle, though, so the Higgs by itself won’t make domain walls or cosmic strings. But there are a lot of proposals for scalar fields, things we haven’t discovered but that physicists think might answer lingering questions in particle physics, and some of those could have the right kind of vacuum values to give us cosmic strings. Thus, if we manage to detect cosmic strings, we could learn something about one of those lingering questions.