Tag Archives: quantum gravity

On Stubbornness and Breaking Down

In physics, we sometimes say that an idea “breaks down”. What do we mean by that?

When a theory “breaks down”, we mean that it stops being accurate. Newton’s theory of gravity is excellent most of the time, but for objects under strong enough gravity or at high enough speed its predictions stop matching reality and a new theory (relativity) is needed. This is the sense in which we say that Newtonian gravity breaks down for the orbit of Mercury, or breaks down much more severely in the area around a black hole.

When a symmetry is “broken”, we mean that it stops holding true. Most of physics looks the same when you flip it in a mirror, a property called parity symmetry. Take a pile of electric and magnetic fields, currents and wires, and you’ll find their mirror reflection is also a perfectly reasonable pile of electric and magnetic fields, currents and wires. This isn’t true for all of physics, though: the weak nuclear force isn’t the same when you flip it in a mirror. We say that the weak force breaks parity symmetry.
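In equations (my shorthand, nothing deeper): under a parity flip, electric fields reverse while magnetic fields don’t, and Maxwell’s equations keep exactly the same form,

```latex
P:\quad \vec{x}\to-\vec{x},\qquad \vec{E}(\vec{x},t)\to-\vec{E}(-\vec{x},t),\qquad \vec{B}(\vec{x},t)\to+\vec{B}(-\vec{x},t).
```

That’s what it means for the mirror image of a pile of fields and wires to be an equally reasonable pile of fields and wires.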

What about when a more general “idea” breaks down? What about space-time?

In order for space-time to break down, there needs to be a good reason to abandon the idea. And depending on how stubborn you are about it, that reason can come at different times.

You might think of space-time as just Einstein’s theory of general relativity. In that case, you could say that space-time breaks down as soon as the world deviates from that theory. In that view, any modification to general relativity, no matter how small, corresponds to space-time breaking down. You can think of this as the “least stubborn” option, the one with barely any stubbornness at all, that will let space-time break down with a tiny nudge.

But if general relativity breaks down, a slightly more stubborn person could insist that space-time is still fine. You can still describe things as located at specific places and times, moving across curved space-time. They just obey extra forces, on top of those built into the space-time.

Such a person would be happy as long as general relativity remained a good approximation of what was going on, but they might admit space-time had broken down once general relativity became a bad approximation. If there were only small corrections on top of the usual space-time picture, then space-time would be fine, but if those corrections got so big that they overwhelmed the original predictions of general relativity, then that’s quite a different situation. In that situation, space-time may have stopped being a useful description, and it may be much better to describe the world in another way.

But we could imagine an even more stubborn person who still insists that space-time is fine. Ultimately, our predictions about the world are mathematical formulas. No matter how complicated they are, we can always subtract a piece off of those formulas corresponding to the predictions of general relativity, and call the rest an extra effect. That may be a totally useless thing to do that doesn’t help you calculate anything, but someone could still do it, and thus insist that space-time still hasn’t broken down.

To convince such a person, space-time would need to break down in a way that made some important concept behind it invalid. There are various ways this could happen, corresponding to different concepts. For example, one unusual proposal is that space-time is non-commutative. If that were true then, in addition to the usual Heisenberg uncertainty principle between position and momentum, there would be an uncertainty principle between different directions in space-time. That would mean that you can’t define the position of something in all directions at once, which many people would agree is an important part of having a space-time!
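For comparison, here is a standard schematic way to write that proposal (with θ a new constant; this is the generic form, not tied to any particular paper): the usual uncertainty principle comes from position and momentum failing to commute, and a non-commutative space-time posits the same failure between positions in different directions,

```latex
[\hat{x},\hat{p}] = i\hbar \;\Rightarrow\; \Delta x\,\Delta p \gtrsim \hbar/2,
\qquad
[\hat{x}^\mu,\hat{x}^\nu] = i\theta^{\mu\nu} \;\Rightarrow\; \Delta x^\mu\,\Delta x^\nu \gtrsim |\theta^{\mu\nu}|/2.
```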

Ultimately, physics is concerned with practicality. We want our concepts not just to be definable, but to do useful work in helping us understand the world. Our stubbornness should depend on whether a concept, like space-time, is still useful. If it is, we keep it. But if the situation changes, and another concept is more useful, then we can confidently say that space-time has broken down.

The Problem of Quantum Gravity Is the Problem of High-Energy (Density) Quantum Gravity

I’ve said something like this before, but here’s another way to say it.

The problem of quantum gravity is one of the most famous problems in physics. You’ve probably heard someone say that quantum mechanics and general relativity are fundamentally incompatible. Most likely, this was narrated over pictures of a foaming, fluctuating grid of space-time. Based on that, you might think that all we have to do to solve this problem is to measure some quantum property of gravity. Maybe we could make a superposition of two different gravitational fields, see what happens, and solve the problem that way.

I mean, we could do that, some people are trying to. But it won’t solve the problem. That’s because the problem of quantum gravity isn’t just the problem of quantum gravity. It’s the problem of high-energy quantum gravity.

Merging quantum mechanics and general relativity is actually pretty easy. General relativity is a big conceptual leap, certainly, a theory in which gravity is really just the shape of space-time. At the same time, though, it’s also a field theory, the same general type of theory as electromagnetism. It’s a weirder field theory than electromagnetism, to be sure, one with deeper implications. But if we want to describe low energies, and weak gravitational fields, then we can treat it just like any other field theory. We know how to write down some pretty reasonable-looking equations, we know how to do some basic calculations with them. This part is just not that scary.

The scary part happens later. The theory we get from these reasonable-looking equations continues to look reasonable for a while. It gives formulas for the probability of things happening: things like gravitational waves bouncing off each other, as they travel through space. The problem comes when those waves have very high energy, and the nice reasonable probability formula now says that the probability is greater than one.

For those of you who haven’t taken a math class in a while, probabilities greater than one don’t make sense. A probability of one is a certainty, something guaranteed to happen. A probability greater than one isn’t more certain than certain, it’s just nonsense.

So we know something needs to change, we know we need a new theory. But we only know we need that theory when the energy is very high: when it’s the Planck energy. Before then, we might still have a different theory, but we might not: it’s not a “problem” yet.
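Schematically, the power counting looks like this (a caricature that drops all the numerical factors and kinematic details): Newton’s constant is one over the Planck mass squared, so the simplest graviton scattering amplitude grows with energy,

```latex
\mathcal{A}(E) \sim \frac{E^2}{M_{\text{Pl}}^2},
\qquad
P \sim |\mathcal{A}|^2 \sim \left(\frac{E}{M_{\text{Pl}}}\right)^4.
```

Keep the energy well below the Planck scale and everything behaves; push it up to the Planck scale and the “probability” crosses one.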

Now, a few of you understand this part, but still have a misunderstanding. The Planck energy seems high for particle physics, but it isn’t high in an absolute sense: it’s about the energy in a tank of gasoline. Does that mean that all we have to do to measure quantum gravity is to make a quantum state out of your car?

Again, no. That’s because the problem of quantum gravity isn’t just the problem of high-energy quantum gravity either.

Energy seems objective, but it’s not. It’s subjective, or more specifically, relative. Due to special relativity, observers moving at different speeds observe different energies. Because of that, high energy alone can’t be the requirement: it isn’t something either general relativity or quantum field theory can “care about” by itself.

Instead, the real thing that matters is something that’s invariant under special relativity. This is hard to define in general terms, but it’s best to think of it as a requirement not on energy, but on energy density.

(For the experts: I’m justifying this phrasing in part because of how you can interpret the quantity appearing in energy conditions as the energy density measured by an observer. This still isn’t the correct way to put it, but I can’t think of a better way that would be understandable to a non-technical reader. If you have one, let me know!)

Why do we need quantum gravity to fully understand black holes? Not just because they have a lot of mass, but because they have a lot of mass concentrated in a small area, a high energy density. Ditto for the Big Bang, when the whole universe had a very large energy density. Particle colliders are useful not just because they give particles high energy, but because they give particles high energy and put them close together, creating a situation with very high energy density.
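To put rough numbers on all of this, here’s a back-of-envelope sketch (the constants are the standard ones, but the tank size and fuel energy per litre are my own round numbers):

```python
import math

# Physical constants (SI units)
hbar = 1.054_571_817e-34  # J s
c = 2.997_924_58e8        # m / s
G = 6.674_30e-11          # m^3 / (kg s^2)

# Planck energy: E_Pl = sqrt(hbar c^5 / G)
E_planck = math.sqrt(hbar * c**5 / G)
print(f"Planck energy:         {E_planck:.2e} J")   # ~2.0e9 J

# A tank of gasoline, for comparison: roughly 50 litres
# at roughly 34 MJ per litre (round numbers, an assumption)
E_tank = 50 * 34e6
print(f"Tank of gasoline:      {E_tank:.2e} J")     # ~1.7e9 J

# The catch is density: cram the Planck energy into a
# Planck-length-sized box and the energy density is absurd.
l_planck = math.sqrt(hbar * G / c**3)               # ~1.6e-35 m
print(f"Planck energy density: {E_planck / l_planck**3:.2e} J/m^3")
```

The energy is mundane; the energy density is anything but.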

Once you understand this, you can use it to think about whether some experiment or observation will help with the problem of quantum gravity. Does the experiment involve very high energy density, much higher than anything we can do in a particle collider right now? Is that telescope looking at something created in conditions of very high energy density, or just something nearby?

It’s not impossible for an experiment that doesn’t meet these conditions to find something. Whatever the correct quantum gravity theory is, it might be different from our current theories in a more dramatic way, one that’s easier to measure. But the only guarantee, the only situation where we know we need a new theory, is for very high energy density.

Simulated Wormhole Analogies

Last week, I talked about how Google’s recent quantum simulation of a toy model wormhole was covered in the press. What I didn’t say much about was my own opinion of the result. Was the experiment important? Was it worth doing? Did it deserve the hype?

Here on this blog, I don’t like to get into those kinds of arguments. When I talk about public understanding of science, I share the same concerns as the journalists: we all want to prevent misunderstandings, and to spread a clearer picture. I can argue that some choices hurt the public understanding and some help it, and be reasonably confident that I’m saying something meaningful, something that would resonate with their stated values.

For the bigger questions, what goals science should have and what we should praise, I have much less of a foundation. We don’t all have a clear shared standard for which science is most important. There isn’t some premise I can posit, a fundamental principle I can use to ground a logical argument.

That doesn’t mean I don’t have an opinion, though. It doesn’t even mean I can’t persuade others of it. But it means the persuasion has to be a bit more loose. For example, I can use analogies.

So let’s say I’m looking at a result like this simulated wormhole. Researchers took advanced technology (Google’s quantum computer), and used it to model a simple system. They didn’t learn anything especially new about that system (since in this case, a normal computer can simulate it better). I get the impression they didn’t learn all that much about the advanced technology: the methods used, at this point, are pretty well-known, at least to Google. I also get the impression that it wasn’t absurdly expensive: I’ve seen other people do things of a similar scale with Google’s machine, and didn’t get the impression they had to pay through the nose for the privilege. Finally, the simple system simulated happens to be “cool”: it’s a toy model studied by quantum gravity researchers, a simple version of that sci-fi standard, the traversable wormhole.

What results are like that?

Occasionally, scientists build tiny things. If the tiny things are cute enough, or cool enough, they tend to get media attention. The most recent example I can remember was a tiny snowman, three microns tall. These tiny things tend to use very advanced technology, and it’s hard to imagine the scientists learn much from making them, but it’s also hard to imagine they cost all that much to make. They’re amusing, and they absolutely get press coverage, spreading wildly over the web. I don’t think they tend to get published in Nature unless they are a bit more advanced, but I wouldn’t be too surprised if I heard of a case that did; scientific journals can be suckers for cute stories too. They don’t tend to get discussed in glowing terms linking them to historical breakthroughs.

That seems like a pretty close analogy. Taken seriously, it would suggest the wormhole simulation was probably worth doing, probably worth a press release and some media coverage, likely not worth publication in Nature, and definitely not worth being heralded as a major breakthrough.

Ok, but proponents of the experiment might argue I’m leaving something out here. This experiment isn’t just a cute simulation. It’s supposed to be a proof of principle, an early version of an experiment that will be an actually useful simulation.

As an analogy for that…did you know LIGO started taking data in 2002?

Most people first heard of the Laser Interferometer Gravitational-Wave Observatory in 2016, when they reported their first detection of gravitational waves. But that was actually “advanced LIGO”. The original LIGO ran from 2002 to 2010, and didn’t detect anything. It just wasn’t sensitive enough. Instead, it was a prototype, an early version designed to test the basic concept.

Similarly, while this wormhole simulation didn’t teach us anything new, future ones might. If the quantum simulation were made larger, it might be possible to simulate more complicated toy models, ones that are too complicated to simulate on a normal computer. These aren’t feasible now, but may be feasible with somewhat bigger quantum computers: still much smaller than the computers that would be needed to break encryption, or even to do simulations that are useful for chemists and materials scientists. Proponents argue that some of these quantum toy models might teach them something interesting about the mathematics of quantum gravity.

Here, though, a number of things weaken the analogy.

LIGO’s first run taught them important things about the noise they would have to deal with, things that they used to build the advanced version. The wormhole simulation didn’t show anything novel about how to use a quantum computer: the type of thing they were doing was well-understood, even if it hadn’t been used to do that yet.

Detecting gravitational waves opened up a new type of astronomy, letting us observe things we could never have observed before. For these toy models, it isn’t obvious to me that the benefit is so unique. Future versions may be difficult to classically simulate, but it wouldn’t surprise me if theorists figured out how to understand them in other ways, or gained the same insight from other toy models and moved on to new questions. They’ll have a while to figure it out, because quantum computers aren’t getting bigger all that fast. I’m very much not an expert in this type of research, so maybe I’m wrong about this…but just comparing to similar research programs, I would be surprised if the quantum simulations end up crucial here.

Finally, even if the analogy held, I don’t think it proves very much. In particular, as far as I can tell, the original LIGO didn’t get much press. At the time, I remember meeting some members of the collaboration, and they clearly didn’t have the fame the project has now. Looking through Google News and the archives of the New York Times, I can’t find all that much about the experiment: a few articles discussing its progress and prospects, but no grand unveiling, no big press releases.

So ultimately, I think viewing the simulation as a proof of principle makes it, if anything, less worth the hype. A prototype like that is only really valuable when it’s testing new methods, and only insofar as the thing it’s a prototype for will be revolutionary. Recently, a prototype fusion device got a lot of press for getting more energy out of a plasma than was put into it (though still much less than it takes to run the machine). People already complained about that being overhyped, and the simulated wormhole is nowhere near that level of importance.

If anything, I think the wormhole-simulators would be on a firmer footing if they thought of their work like the tiny snowmen. It’s cute, a fun side benefit of advanced technology, and as such something worth chatting about and celebrating a bit. But it’s not the start of a new era.

Simulated Wormholes for My Real Friends, Real Wormholes for My Simulated Friends

Maybe you’ve recently seen a headline like this:

Actually, I’m more worried that you saw that headline before it was edited, when it looked like this:

If you’ve seen either headline, and haven’t read anything else about it, then please at least read this:

Physicists have not created an actual wormhole. They have simulated a wormhole on a quantum computer.

If you’re willing to read more, then read the rest of this post. There’s a more subtle story going on here, both about physics and about how we communicate it. And for the experts, hold on, because when I say the wormhole was a simulation I’m not making the same argument everyone else is.

[And for the mega-experts, there’s an edit later in the post where I soften that claim a bit.]

The headlines at the top of this post come from an article in Quanta Magazine. Quanta is a web-based magazine covering many fields of science. They’re read by the general public, but they aim for a higher standard than many science journalists, with stricter fact-checking and a goal of covering more challenging and obscure topics. Scientists in turn have tended to be quite happy with them: often, they cover things we feel are important but that the ordinary media isn’t able to cover. (I even wrote something for them recently.)

Last week, Quanta published an article about an experiment with Google’s Sycamore quantum computer. By arranging the quantum bits (qubits) in a particular way, they were able to observe behaviors one would expect out of a wormhole, a kind of tunnel linking different points in space and time. They published it with the second headline above, claiming that physicists had created a wormhole with a quantum computer and explaining how, using a theoretical picture called holography.

This pissed off a lot of physicists. After push-back, Quanta’s Twitter account published this statement, and they added the word “Holographic” to the title.

Why were physicists pissed off?

It wasn’t because the Quanta article was wrong, per se. As far as I’m aware, all the technical claims they made are correct. Instead, it was about two things. One was the title, and the implication that physicists “really made a wormhole”. The other was the tone, the excited “breaking news” framing complete with a video comparing the experiment with the discovery of the Higgs boson. I’ll discuss each in turn:

The Title

Did physicists really create a wormhole, or did they simulate one? And why would that be at all confusing?

The story rests on a concept from the study of quantum gravity, called holography. Holography is the idea that in quantum gravity, certain gravitational systems like black holes are fully determined by what happens on a “boundary” of the system, like the event horizon of a black hole. It’s supposed to be a hologram in analogy to 3d images encoded in 2d surfaces, rather than like the hard-light constructions of science fiction.

The best-studied version of holography is something called AdS/CFT duality. AdS/CFT duality is a relationship between two different theories. One of them is a CFT, or “conformal field theory”, a type of particle physics theory with no gravity and no mass. (The first example of the duality used my favorite toy theory, N=4 super Yang-Mills.) The other one is a version of string theory in an AdS, or anti-de Sitter space, a version of space-time curved so that objects shrink as they move outward, approaching a boundary. (In the first example, this space-time had five dimensions curled up in a sphere and the rest in the anti-de Sitter shape.)

These two theories are conjectured to be “dual”. That means that, for anything that happens in one theory, you can give an alternate description using the other theory. We say the two theories “capture the same physics”, even though they appear very different: they have different numbers of dimensions of space, and only one has gravity in it.

Many physicists would claim that if two theories are dual, then they are both “equally real”. Even if one description is more familiar to us, both descriptions are equally valid. Many philosophers are skeptical, but honestly I think the physicists are right about this one. Philosophers try to figure out which things are real or not real, to make a list of real things and explain everything else as made up of those in some way. I think that whole project is misguided, that it’s clarifying how we happen to talk rather than the nature of reality. In my mind, dualities are some of the clearest evidence that this project doesn’t make any sense: two descriptions can look very different, but in a quite meaningful sense be totally indistinguishable.

That’s the sense in which Quanta and Google and the string theorists they’re collaborating with claim that physicists have created a wormhole. They haven’t created a wormhole in our own space-time, one that, were it bigger and more stable, we could travel through. It isn’t progress towards some future where we actually travel the galaxy with wormholes. Rather, they created some quantum system, and that system’s dual description is a wormhole. That’s a crucial point to remember: even if they created a wormhole, it isn’t a wormhole for you.

If that were the end of the story, this post would still be full of warnings, but the title would be a bit different. It was going to be “Dual Wormholes for My Real Friends, Real Wormholes for My Dual Friends”. But there’s a list of caveats. Most of them arguably don’t matter, but the last was what got me to change the word “dual” to “simulated”.

  1. The real world is not described by N=4 super Yang-Mills theory. N=4 super Yang-Mills theory was never intended to describe the real world. And while the real world may well be described by string theory, those strings are not curled up around a five-dimensional sphere with the remaining dimensions in anti-de Sitter space. Nor can we create either theory in a lab.
  2. The Standard Model probably has a quantum gravity dual too, see this cute post by Matt Strassler. But they still wouldn’t have been able to use that to make a holographic wormhole in a lab.
  3. Instead, they used a version of AdS/CFT with fewer dimensions. It relates a weird form of gravity in one space and one time dimension (called JT gravity), to a weird quantum mechanics theory called SYK, with an infinite number of quantum particles or qubits. This duality is a bit more conjectural than the original one, but still reasonably well-established.
  4. Quantum computers don’t have an infinite number of qubits, so they had to use a version with a finite number: seven, to be specific. They trimmed the model down so that it would still show the wormhole-dual behavior they wanted. At this point, you might say that they’re definitely just simulating the SYK theory, using a small number of qubits to simulate the infinite number. But I think they could argue that this system, too, has a quantum gravity dual. The dual would have to be even weirder than JT gravity, and even more conjectural, but the signs of wormhole-like behavior they observed (mostly through simulations on an ordinary computer, which is still better at this kind of thing than a quantum computer) could be seen as evidence that this limited theory has its own gravity partner, with its own “real dual” wormhole.
  5. But those seven qubits don’t just have the interactions they were programmed to have, the ones with the dual. They are physical objects in the real world, so they interact with all of the forces of the real world. That includes, though very weakly, the force of gravity.

And that’s where I think things break, and you have to call the experiment a simulation. You can argue, if you really want to, that the seven-qubit SYK theory has its own gravity dual, with its own wormhole. There are people who expect duality to be broad enough to include things like that.

But you can’t argue that the seven-qubit SYK theory, plus gravity, has its own gravity dual. Theories that already have gravity are not supposed to have gravity duals. If you pushed hard enough on any of the string theorists on that team, I’m pretty sure they’d admit that.

That is what decisively makes the experiment a simulation. It approximately behaves like a system with a dual wormhole, because you can approximately ignore gravity. But if you’re making some kind of philosophical claim, that you “really made a wormhole”, then “approximately” doesn’t cut it: if you don’t exactly have a system with a dual, then you don’t “really” have a dual wormhole: you’ve just simulated one.

Edit: mitchellporter in the comments points out something I didn’t know: that there are in fact proposals for gravity theories with gravity duals. They are in some sense even more conjectural than the series of caveats above, but at minimum my claim above, that any of the string theorists on the team would agree that the system’s gravity means it can’t have a dual, is probably false.

I think at this point, I’d soften my objection to the following:

Describing the system of qubits in the experiment as a limited version of the SYK theory is in one way or another an approximation. It approximates them as not having any interactions beyond those they programmed, it approximates them as not affected by gravity, and because it’s a quantum mechanical description it even approximates the speed of light as infinite. Those approximations don’t guarantee that the system doesn’t have a gravity dual. But in order for them to, our reality, overall, would have to have a gravity dual. There would have to be a dual gravity interpretation of everything, not just the inside of Google’s quantum computer, and it would have to be exact, not just an approximation. Then the approximate SYK would be dual to an approximate wormhole, but that approximate wormhole would be an approximation of some “real” wormhole in the dual space-time.

That’s not impossible, as far as I can tell. But it piles conjecture upon conjecture upon conjecture, to the point that I don’t think anyone has explicitly committed to the whole tower of claims. If you want to believe that this experiment literally created a wormhole, you thus can, but keep in mind the largest asterisk known to mankind.

End edit.

If it weren’t for that caveat, then I would be happy to say that the physicists really created a wormhole. It would annoy some philosophers, but that’s a bonus.

But even if that were true, I wouldn’t say that in the title of the article.

The Title, Again

These days, people get news in two main ways.

Sometimes, people read full news articles. Reading that Quanta article is a good way to understand the background of the experiment, what was done and why people care about it. As I mentioned earlier, I don’t think anything said there was wrong, and they cover essentially all of the caveats you’d care about (except for that last one 😉 ).

Sometimes, though, people just see headlines. They get forwarded on social media, glanced at as they’re passed between friends. If you’re popular enough, then many more people will see your headline than will actually read the article. For many people, their whole understanding of certain scientific fields is formed by these glancing impressions.

Because of that, if you’re popular and news-y enough, you have to be especially careful with what you put in your headlines, especially when it implies a cool science fiction story. People will almost inevitably see them out of context, and it will impact their view of where science is headed. In this case, the headline may have given many people the impression that we’re actually making progress towards travel via wormholes.

Some of my readers might think this is ridiculous, that no-one would believe something like that. But as a kid, I did. I remember reading popular articles about wormholes, describing how you’d need energy moving in a circle, and other articles about optical physicists finding ways to bend light and make it stand still. Putting two and two together, I assumed these ideas would one day merge, allowing us to travel to distant galaxies faster than light.

If I had seen Quanta’s headline at that age, I would have taken it as confirmation. I would have believed we were well on the way to making wormholes, step by step. Even the New York Times headline, “the Smallest, Crummiest Wormhole You Can Imagine”, wouldn’t have fazed me.

(I’m not sure even the extra word “holographic” would have. People don’t know what “holographic” means in this context, and while some of them would assume it meant “fake”, others would think about the many works of science fiction, like Star Trek, where holograms can interact physically with human beings.)

Quanta has a high-brow audience, many of whom wouldn’t make this mistake. Nevertheless, I think Quanta is popular enough, and respectable enough, that they should have done better here.

At minimum, they could have used the word “simulated”. Even if they go on to argue in the article that the wormhole is real, and not just a simulation, the word in the title does no real harm. It would be a lie, but a beneficial “lie to children”, the basic stock-in-trade of science communication. I think they could have defended it to the string theorists they interviewed on those grounds.

The Tone

Honestly, I don’t think people would have been nearly so pissed off were it not for the tone of the article. There are a lot of physics bloggers who view themselves as serious-minded people, opposed to hype and publicity stunts. They view the research program aimed at simulating quantum gravity on a quantum computer as just an attempt to link a dying and un-rigorous research topic to an over-hyped and over-funded one, pompous storytelling aimed at promoting the careers of people who are already extremely successful.

These people tend to view Quanta favorably, because it covers serious-minded topics in a thorough way. And so many of them likely felt betrayed, seeing this Quanta article as a massive failure of that serious-minded-ness, falling for or even endorsing the hypiest of hype.

To those people, I’d like to politely suggest you get over yourselves.

Quanta’s goal is to cover things accurately, to represent all the facts in a way people can understand. But “how exciting something is” is not a fact.

Excitement is subjective. Just because most of the things Quanta finds exciting you also find exciting, does not mean that Quanta will find the things you find unexciting unexciting. Quanta is not on “your side” in some war against your personal notion of unexciting science, and you should never have expected it to be.

In fact, Quanta tends to find things exciting, in general. They were more excited than I was about the amplituhedron, and I’m an amplitudeologist. Part of what makes them consistently excited about the serious-minded things you appreciate them for is that they listen to scientists and get excited about the things they’re excited about. That is going to include, inevitably, things those scientists are excited about for what you think are dumb groupthinky hype reasons.

I think the way Quanta titled the piece was unfortunate, and probably did real damage. I think the philosophical claim behind the title is wrong, though for subtle and weird enough reasons that I don’t really fault anybody for ignoring them. But I don’t think the tone they took was a failure of journalistic integrity or research or anything like that. It was a matter of taste. It’s not my taste, it’s probably not yours, but we shouldn’t have expected Quanta to share our tastes in absolutely everything. That’s just not how taste works.

Trapped in the (S) Matrix

I’ve tried to convince you that you are a particle detector. You choose your experiment, what actions you take, and then observe the outcome. If you focus on that view of yourself, data out and data in, you start to wonder if the world outside really has any meaning. Maybe you’re just trapped in the Matrix.

From a physics perspective, you actually are trapped in a sort of a Matrix. We call it the S Matrix.

“S” stands for scattering. The S Matrix is a formula we use, a mathematical tool that tells us what happens when fundamental particles scatter: when they fly towards each other, colliding or bouncing off. For each action we could take, the S Matrix gives the probability of each outcome: for each pair of particles we collide, the chance we detect different particles at the end. You can imagine putting every possible action in a giant vector, and every possible observation in another giant vector. Arrange the probabilities for each action-observation pair in a big square grid, and that’s a matrix.

Actually, I lied a little bit. This is particle physics, and particle physics uses quantum mechanics. Because of that, the entries of the S Matrix aren’t probabilities: they’re complex numbers called probability amplitudes. You have to multiply them by their complex conjugate to get probability out.

Ok, that probably seemed like a lot of detail. Why am I telling you all this?

What happens when you multiply the whole S Matrix by its conjugate transpose? (Using matrix multiplication, naturally.) You can still pick your action, but now you’re adding up every possible outcome. You’re asking “suppose I take an action. What’s the chance that anything happens at all?”

The answer to that question is 1. There is a 100% chance that something happens, no matter what you do. That’s just how probability works.

We call this property unitarity, the property of giving “unity”, or one. And while it may seem obvious, it isn’t always so easy. That’s because we don’t actually know the S Matrix formula most of the time. We have to approximate it, a partial formula that only works for some situations. And unitarity can tell us how much we can trust that formula.
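Here’s a toy numerical version of the whole argument (a two-state example I made up, not any real particle physics): build a small unitary matrix of amplitudes, square their magnitudes to get probabilities, and check that whatever action you pick, the chance that something happens is one.

```python
import numpy as np

# A toy 2x2 S Matrix: two possible actions (rows), two possible
# outcomes (columns). The entries are probability amplitudes,
# so the matrix needs to be unitary; a rotation-like one works.
theta = 0.3
S = np.array([[np.cos(theta), 1j * np.sin(theta)],
              [1j * np.sin(theta), np.cos(theta)]])

# Probability = amplitude times its complex conjugate
probs = np.abs(S) ** 2
print(probs)               # [[0.913, 0.087], [0.087, 0.913]]
print(probs.sum(axis=1))   # [1. 1.]: something always happens

# Unitarity: the S Matrix times its conjugate transpose is the identity
print(np.allclose(S.conj().T @ S, np.eye(2)))  # True
```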

Imagine doing an experiment trying to detect neutrinos, like the IceCube Neutrino Observatory. For you to detect the neutrinos, they must scatter off of electrons, kicking them off of their atoms or transforming them into another charged particle. You can then watch what happens as the energy of the neutrinos increases. If you do that, you’ll notice the probability also starts to increase: it gets more and more likely that the neutrino can scatter an electron. You might propose a formula for this, one that grows with energy. [EDIT: Example changed after a commenter pointed out an issue with it.]

If you keep increasing the energy, though, you run into a problem. Those probabilities you predict are going to keep increasing. Eventually, you’ll predict a probability greater than one.

That tells you that your theory might have been fine before, but doesn’t work for every situation. There’s something you don’t know about, which will change your formula when the energy gets high. You’ve violated unitarity, and you need to fix your theory.

In this case, the fix is already known. Neutrinos and electrons interact due to another particle, called the W boson. If you include that particle, then you fix the problem: your probabilities stop going up and up; instead, their growth slows, and they stay below one.
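As a cartoon of that fix (the Fermi constant and W mass below are the real values, but the “amplitude” is schematic power counting, not the actual electroweak formula):

```python
import numpy as np

G_F = 1.166e-5   # Fermi constant, GeV^-2
m_W = 80.4       # W boson mass, GeV

E = np.array([10.0, 100.0, 300.0, 1000.0])  # energies in GeV

# Without the W: a contact interaction whose strength grows like G_F E^2
amp_no_W = G_F * E**2

# With the W: the contact interaction resolves into a propagator,
# and the growth levels off at high energy
amp_with_W = G_F * E**2 * m_W**2 / (m_W**2 + E**2)

for e, a1, a2 in zip(E, amp_no_W, amp_with_W):
    print(f"E = {e:6.0f} GeV   no W: {a1:8.3f}   with W: {a2:6.4f}")
# The "no W" column eventually passes 1; the "with W" column
# levels off near G_F * m_W^2, about 0.075
```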

For other theories, we don’t yet know the fix. Try to write down an S Matrix for colliding gravitational waves (or really, gravitons), and you meet the same kind of problem, a probability that just keeps growing. Currently, we don’t know how that problem should be solved: string theory is one answer, but may not be the only one.

So even if you’re trapped in an S Matrix, sending data out and data in, you can still use logic. You can still demand that probability makes sense, that your matrix never gives a chance greater than 100%. And you can learn something about physics when you do!

At New Ideas in Cosmology

The Niels Bohr Institute is hosting a conference this week on New Ideas in Cosmology. I’m no cosmologist, but it’s a pretty cool field, so as a local I’ve been sitting in on some of the talks. So far they’ve had a selection of really interesting speakers with quite a variety of interests, including a talk by Roger Penrose with his trademark hand-stippled drawings.

Including this old classic

One thing that has impressed me has been the “interdisciplinary” feel of the conference. By all rights this should be one “discipline”, cosmology. But in practice, each speaker came at the subject from a different direction. They all had a shared core of knowledge, common models of the universe they all compare to. But the knowledge they brought to the subject varied: some had deep knowledge of the mathematics of gravity, others worked with string theory, or particle physics, or numerical simulations. Each talk, aware of the varied audience, was a bit “colloquium-style”, introducing a framework before diving into the latest research. Each speaker knew enough to talk to the others, but not so much that they couldn’t learn from them. It’s been unexpectedly refreshing, a real interdisciplinary conference done right.

Duality and Emergence: When Is Spacetime Not Spacetime?

Spacetime is doomed! At least, so say some physicists. They don’t mean this as a warning, like some comic-book universe-destroying disaster, but rather as a research plan. These physicists believe that what we think of as space and time aren’t the full story, but that they emerge from something more fundamental, so that an ultimate theory of nature might not use space or time at all. Other, grumpier physicists are skeptical. Joined by a few philosophers, they think the “spacetime is doomed” crowd are over-excited and exaggerating the implications of their discoveries. At the heart of the argument is the distinction between two related concepts: duality and emergence.

In physics, sometimes we find that two theories are actually dual: despite seeming different, the patterns of observations they predict are the same. Some of the more popular examples are what we call holographic theories. In these situations, a theory of quantum gravity in some space-time is dual to a theory without gravity describing the edges of that space-time, sort of like how a hologram is a 2D image that looks 3D when you move it. For any question you can ask about the gravitational “bulk” space, there is a matching question on the “boundary”. No matter what you observe, neither description will fail.

If theories with gravity can be described by theories without gravity, does that mean gravity doesn’t really exist? If you’re asking that question, you’re asking whether gravity is emergent. An emergent theory is one that isn’t really fundamental, but instead a result of the interaction of more fundamental parts. For example, hydrodynamics, the theory of fluids like water, emerges from more fundamental theories that describe the motion of atoms and molecules.

(For the experts: I, like most physicists, am talking about “weak emergence” here, not “strong emergence”.)

The “spacetime is doomed” crowd think that not just gravity, but space-time itself is emergent. They expect that distances and times aren’t really fundamental, but a result of relationships that will turn out to be more fundamental, like entanglement between different parts of quantum fields. As evidence, they like to bring up dualities where the dual theories have different concepts of gravity, number of dimensions, or space-time. Using those theories, they argue that space and time might “break down”, and not be really fundamental.

(I’ve made arguments like that in the past too.)

The skeptics, though, bring up an important point. If two theories are really dual, then no observation can distinguish them: they make exactly the same predictions. In that case, say the skeptics, what right do you have to call one theory more fundamental than the other? You can say that gravity emerges from a boundary theory without gravity, but you could just as easily say that the boundary theory emerges from the gravity theory. The whole point of duality is that no theory is “more true” than the other: one might be more or less convenient, but both describe the same world. If you want to really argue for emergence, then your “more fundamental” theory needs to do something extra: to predict something that your emergent theory doesn’t predict.

Sometimes this is a fair objection. There are members of the “spacetime is doomed” crowd who are genuinely reckless about this, who’ll tell a journalist about emergence when they really mean duality. But many of these people are more careful, and have thought more deeply about the question. They tend to have some mix of these two perspectives:

First, if two descriptions give the same results, then do the descriptions matter? As physicists, we have a history of treating theories as the same if they make the same predictions. Space-time itself is a result of this policy: in the theory of relativity, two people might disagree on which one of two events happened first or second, but they will agree on the overall distance in space-time between the two. From this perspective, a duality between a bulk theory and a boundary theory isn’t evidence that the bulk theory emerges from the boundary, but it is evidence that both the bulk and boundary theories should be replaced by an “overall theory”, one that treats bulk and boundary as irrelevant descriptions of the same physical reality. This perspective is similar to an old philosophical theory called positivism: that statements are meaningless if they cannot be derived from something measurable. That theory wasn’t very useful for philosophers, which is probably part of why some philosophers are skeptics of “space-time is doomed”. The perspective has been quite useful to physicists, though, so we’re likely to stick with it.
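The space-time example in that paragraph, as a formula: two observers can disagree about the time Δt between two events (even about its sign), but they agree on the invariant interval,

```latex
\Delta s^2 = -c^2\,\Delta t^2 + \Delta x^2 + \Delta y^2 + \Delta z^2.
```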

Second, some will say that it’s true that a dual theory is not an emergent theory…but it can be the first step to discover one. In this perspective, dualities are suggestive evidence that a deeper theory is waiting in the wings. The idea would be that one would first discover a duality, then discover situations that break that duality: examples on one side that don’t correspond to anything sensible on the other. Maybe some patterns of quantum entanglement are dual to a picture of space-time, but some are not. (Closer to my sub-field, maybe there’s an object like the amplituhedron that doesn’t respect locality or unitarity.) If you’re lucky, maybe there are situations, or even experiments, that go from one to the other: where the space-time description works until a certain point, then stops working, and only the dual description survives. Some of the models of emergent space-time people study are genuinely of this type, where a dimension emerges in a theory that previously didn’t have one. (For those of you having a hard time imagining this, read my old post about “bubbles of nothing”, then think of one happening in reverse.)

It’s premature to say space-time is doomed, at least as a definite statement. But it is looking like, one way or another, space-time won’t be the right picture for fundamental physics. Maybe that’s because it’s equivalent to another description, redundant embellishment on an essential theoretical core. Maybe instead it breaks down, and a more fundamental theory could describe more situations. We don’t know yet. But physicists are trying to figure it out.

The arXiv SciComm Challenge

Fellow science communicators, think you can explain everything that goes on in your field? If so, I have a challenge for you. Pick a day, and go through all the new papers on arXiv.org in a single area. For each one, try to give a general-audience explanation of what the paper is about. To make it easier, you can ignore cross-listed papers. If your field doesn’t use arXiv, consider if you can do the challenge with another appropriate site.

I’ll start. I’m looking at papers in the “High Energy Physics – Theory” area, announced 6 Jan, 2022. I’ll warn you in advance that I haven’t read these papers, just their abstracts, so apologies if I get your paper wrong!

arXiv:2201.01303 : Holographic State Complexity from Group Cohomology

This paper says it is a contribution to a Proceedings. That means it is based on a talk given at a conference. In my field, a talk like this usually won’t be presenting new results, but instead summarizes results in a previous paper. So keep that in mind.

There is an idea in physics called holography, where two theories are secretly the same even though they describe the world with different numbers of dimensions. Usually this involves a gravitational theory in a “box”, and a theory without gravity that describes the sides of the box. The sides turn out to fully describe the inside of the box, much like a hologram looks 3D but can be printed on a flat sheet of paper. Using this idea, physicists have connected some properties of gravity to properties of the theory on the sides of the box. One of those properties is complexity: the complexity of the theory on the sides of the box says something about gravity inside the box, in particular about the size of wormholes. The trouble is, “complexity” is a bit subjective: it’s not clear how to give a good definition for it for this type of theory. In this paper, the author studies a theory with a precise mathematical definition, called a topological theory. This theory turns out to have mathematical properties that suggest a well-defined notion of complexity for it.

arXiv:2201.01393 : Nonrelativistic effective field theories with enhanced symmetries and soft behavior

We sometimes describe quantum field theory as quantum mechanics plus relativity. That’s not quite true though, because it is possible to define a quantum field theory that doesn’t obey special relativity, a non-relativistic theory. Physicists do this if they want to describe a system moving much slower than the speed of light: it gets used sometimes for nuclear physics, and sometimes for modeling colliding black holes.

In particle physics, a “soft” particle is one with almost no momentum. We can classify theories based on how they behave when a particle becomes more and more soft. In normal quantum field theories, if they have special behavior when a particle becomes soft it’s often due to a symmetry of the theory, where the theory looks the same even if something changes. This paper shows that this is not true for non-relativistic theories: they have more requirements to have special soft behavior, not just symmetry. They “bootstrap” a few theories, using some general restrictions to find them without first knowing how they work (“pulling them up by their own bootstraps”), and show that the theories they find are in a certain sense unique, the only theories of that kind.

arXiv:2201.01552 : Transmutation operators and expansions for 1-loop Feynman integrands

In recent years, physicists in my sub-field have found new ways to calculate the probability that particles collide. One of these methods describes ordinary particles in a way resembling string theory, and from this discovered a whole “web” of theories that were linked together by small modifications of the method. This method originally worked only for the simplest Feynman diagrams, the “tree” diagrams that correspond to classical physics, but was extended to the next-simplest diagrams, diagrams with one “loop” that start incorporating quantum effects.

This paper concerns a particular spinoff of this method, that can find relationships between certain one-loop calculations in a particularly efficient way. It lets you express calculations of particle collisions in a variety of theories in terms of collisions in a very simple theory. Unlike the original method, it doesn’t rely on any particular picture of how these collisions work, either Feynman diagrams or strings.

arXiv:2201.01624 : Moduli and Hidden Matter in Heterotic M-Theory with an Anomalous U(1) Hidden Sector

In string theory (and its more sophisticated cousin M theory), our four-dimensional world is described as a world with more dimensions, where the extra dimensions are twisted up so that they cannot be detected. The shape of the extra dimensions influences the kinds of particles we can observe in our world. That shape is described by variables called “moduli”. If those moduli are stable, then the properties of particles we observe would be fixed, otherwise they would not be. In general it is a challenge in string theory to stabilize these moduli and get a world like what we observe.

This paper discusses shapes that give rise to a “hidden sector”, a set of particles that are disconnected from the particles we know so that they are hard to observe. Such particles are often proposed as a possible explanation for dark matter. This paper calculates, for a particular kind of shape, what the masses of different particles are, as well as how different kinds of particles can decay into each other. For example, a particle that causes inflation (the accelerating expansion of the universe) can decay into effects on the moduli and dark matter. The paper also shows how some of the moduli are made stable in this picture.

arXiv:2201.01630 : Chaos in Celestial CFT

One variant of the holography idea I mentioned earlier is called “celestial” holography. In this picture, the sides of the box are an infinite distance away: a “celestial sphere” depicting the angles particles go after they collide, in the same way a star chart depicts the angles between stars. Recent work has shown that there is something like a sensible theory that describes physics on this celestial sphere, that contains all the information about what happens inside.

This paper shows that the celestial theory has a property called quantum chaos. In physics, a theory is said to be chaotic if it depends very precisely on its initial conditions, so that even a small change will result in a large change later (the usual metaphor is a butterfly flapping its wings and causing a hurricane). This kind of behavior appears to be present in this theory.
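If you want to see that kind of sensitivity in the simplest possible setting, here’s a classical toy demo, the logistic map (just an illustration of the “butterfly” idea; the diagnostics of quantum chaos used in papers like this are subtler):

```python
# Two logistic-map trajectories that start a hair apart
# end up completely different.
r = 4.0                  # fully chaotic regime
x, y = 0.2, 0.2 + 1e-10  # initial conditions differing by 1e-10

for step in range(1, 51):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: |difference| = {abs(x - y):.3e}")
# The difference grows from 1e-10 to order one in roughly 35 steps.
```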

arXiv:2201.01657 : Calculations of Delbrück scattering to all orders in αZ

Delbrück scattering is an effect where the nuclei of heavy elements like lead can deflect high-energy photons, as a consequence of quantum field theory. This effect is apparently tricky to calculate, and previous calculations have involved approximations. This paper finds a way to calculate the effect without those approximations, which should let it match better with experiments.

(As an aside, I’m a little confused by the claim that they’re going to all orders in αZ when it looks like they just consider one-loop diagrams…but this is probably just my ignorance, this is a corner of the field quite distant from my own.)

arXiv:2201.01674 : On Unfolded Approach To Off-Shell Supersymmetric Models

Supersymmetry is a relationship between two types of particles: fermions, which typically make up matter, and bosons, which are usually associated with forces. In realistic theories this relationship is “broken” and the two types of particles have different properties, but theoretical physicists often study models where supersymmetry is “unbroken” and the two types of particles have the same mass and charge. This paper finds a new way of describing some theories of this kind that reorganizes them in an interesting way, using an “unfolded” approach in which aspects of the particles that would normally be combined are given their own separate variables.

(This is another one I don’t know much about; it’s the first time I’ve heard of the unfolded approach.)

arXiv:2201.01679 : Geometric Flow of Bubbles

String theorists have conjectured that only some types of theories can be consistently combined with a full theory of quantum gravity, others live in a “swampland” of non-viable theories. One set of conjectures characterizes this swampland in terms of “flows” in which theories with different geometry can flow in to each other. The properties of these flows are supposed to be related to which theories are or are not in the swampland.

This paper writes down equations describing these flows, and applies them to some toy model “bubble” universes.

arXiv:2201.01697 : Graviton scattering amplitudes in first quantisation

This paper is a pedagogical one, introducing graduate students to a topic rather than presenting new research.

Usually in quantum field theory we do something called “second quantization”, thinking about the world not in terms of particles but in terms of fields that fill all of space and time. However, sometimes one can instead use “first quantization”, which is much more similar to ordinary quantum mechanics. There you think of a single particle traveling along a “world-line”, and calculate the probability it interacts with other particles in particular ways. This approach has recently been used to calculate interactions of gravitons, particles related to the gravitational field in the same way photons are related to the electromagnetic field. The approach has some advantages in terms of simplifying the results, which are described in this paper.

Stop Listing the Amplituhedron as a Competitor of String Theory

The Economist recently had an article (paywalled) that meandered through various developments in high-energy physics. It started out talking about the failure of the LHC to find SUSY, argued this looked bad for string theory (which…not really?) and used it as a jumping-off point to talk about various non-string “theories of everything”. Peter Woit quoted it a few posts back as kind of a bellwether for public opinion on supersymmetry and string theory.

The article was a muddle, but a fairly conventional muddle, explaining or mis-explaining things in roughly the same way as other popular physics pieces. For the most part that didn’t bug me, but one piece of the muddle hit a bit close to home:

The names of many of these [non-string theories of everything] do, it must be conceded, torture the English language. They include “causal dynamical triangulation”, “asymptotically safe gravity”, “loop quantum gravity” and the “amplituhedron formulation of quantum theory”.

I’ve posted about the amplituhedron more than a few times here on this blog. Out of every achievement of my sub-field, it has most captured the public imagination. It’s legitimately impressive, a way to translate calculations of probabilities of collisions of fundamental particles (in a toy model, to be clear) into geometrical objects. What it isn’t, and doesn’t pretend to be, is a theory of everything.

To be fair, the Economist piece admits this:

Most attempts at a theory of everything try to fit gravity, which Einstein describes geometrically, into quantum theory, which does not rely on geometry in this way. The amplituhedron approach does the opposite, by suggesting that quantum theory is actually deeply geometric after all. Better yet, the amplituhedron is not founded on notions of spacetime, or even statistical mechanics. Instead, these ideas emerge naturally from it. So, while the amplituhedron approach does not as yet offer a full theory of quantum gravity, it has opened up an intriguing path that may lead to one.

The reasoning leading up to that concession, though, contains a few misunderstandings. The amplituhedron is geometrical, but in a completely different way from how Einstein’s theory of gravity is geometrical: Einstein’s gravity is a theory of space and time, while the amplituhedron’s magic is that it hides space and time behind a seemingly more fundamental mathematics.

This is not to say that the amplituhedron won’t lead to insights about gravity. That’s a big part of what it’s for, in the long-term. Because the amplituhedron hides the role of space and time, it might show the way to theories that lack them altogether, theories where space and time are just an approximation for a more fundamental reality. That’s a real possibility, though not at this point a reality.

Even if you take this possibility completely seriously, though, there’s another problem with the Economist’s description: it’s not clear that this new theory would be a non-string theory!

The main people behind the amplituhedron are pretty positively disposed to string theory. If you asked them, I think they’d tell you that, rather than replacing string theory, they expect to learn more about string theory: to see how it could be reformulated in a way that yields insight about trickier problems. That’s not at all like the other “non-string theories of everything” in that list, which frame themselves as alternatives to, or even opponents of, string theory.

It is a lot like several other research programs, though, like ER=EPR and It from Qubit. Researchers in those programs try to use physical principles and toy models to say fundamental things about quantum gravity, trying to think about space and time as being made up of entangled quantum objects. By that logic, they belong in that list in the article alongside the amplituhedron. The reason they aren’t is obvious if you know where they come from: ER=EPR and It from Qubit are worked on by string theorists, including some of the most prominent ones.

The thing is, any reason to put the amplituhedron on that list is also a reason to put them there. The amplituhedron is not a theory of everything, and it is not at present a theory of quantum gravity. It’s a research direction that might yield new insights about quantum gravity. It doesn’t explicitly involve strings, but neither does It from Qubit most of the time. Unless you’re going to describe It from Qubit as a “non-string theory of everything”, you really shouldn’t describe the amplituhedron as one.

The amplituhedron is a really cool idea, one with great potential. It’s not something like loop quantum gravity, or causal dynamical triangulations, and it doesn’t need to be. Let it be what it is, please!

Amplitudes 2021 Retrospective

Phew!

The conference photo

Now that I’ve rested up after this year’s Amplitudes, I’ll give a few of my impressions.

Overall, I think the conference went pretty well. People seemed amused by the digital Niels Bohr, even if he looked a bit like a puppet (Lance compared him to Yoda in his final speech, which was…apt). We used Gather.town, originally just for the poster session and a “virtual reception”, but later we also encouraged people to meet up in it during breaks. That in particular was a big hit: I think people really liked the ability to just move around and chat in impromptu groups, and while nobody seemed to use the “virtual bar”, the “virtual beach” had a lively crowd. Time zones were inevitably rough, but I think we ended up with a good compromise where everyone could still see a meaningful chunk of the conference.

A few things didn’t work as well. For those planning conferences, I would strongly suggest not making a brand new gmail account to send out conference announcements: for a lot of people the emails went straight to spam. Zulip was a bust: I’m not sure if people found it more confusing than last year’s Slack or didn’t notice it due to the spam issue, but almost no-one posted in it. YouTube was complicated: the stream went down a few times and I could never figure out exactly why; it may have just been internet issues here at the Niels Bohr Institute (we did have a power outage one night and had to scramble to get internet access back the next morning). As far as I could tell YouTube wouldn’t let me re-open the previous stream, so each time I had to post a new link, which was probably frustrating for those following along there.

That said, this was less of a problem than it might have been, because attendance/”viewership” as a whole was lower than expected. Zoomplitudes last year had massive numbers of people join in both on Zoom and via YouTube. We had a lot fewer: out of over 500 registered participants, we had fewer than 200 on Zoom at any one time, and at most 30 or so on YouTube. Confusion around the conference email might have played a role here, but I suspect part of the difference is simple fatigue: after over a year of this pandemic, online conferences no longer feel like an exciting new experience.

The actual content of the conference ranged pretty widely. Some people reviewed earlier work, others presented recent papers or even work-in-progress. As in recent years, a meaningful chunk of the conference focused on applications of amplitudes techniques to gravitational wave physics. This included a talk by Thibault Damour, who has by now mostly made his peace with the field after his early doubts were sorted out. He still suspected that the mismatch of scales (weak coupling on the one hand, classical scattering on the other) would cause problems in the future, but after his work with Laporta and Mastrolia even he had to acknowledge that amplitudes techniques were useful.

In the past I would have put the double-copy and gravitational wave researchers under the same heading, but this year they were quite distinct. While a few of the gravitational wave talks mentioned the double-copy, most of those who brought it up were doing something quite a bit more abstract than gravitational wave physics. Indeed, several people were pushing the boundaries of what it means to double-copy. There were modified KLT kernels, different versions of color-kinematics duality, and explorations of what kinds of massive particles can and (arguably more interestingly) cannot be compatible with a double-copy framework. The sheer range of different generalizations had me briefly wondering whether the double-copy could be “too flexible to be meaningful”, whether the right definitions would let you double-copy anything out of anything. I was reassured by the points where each talk argued that certain things didn’t work: it suggests that wherever this mysterious structure comes from, its powers are limited enough to make it meaningful.
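
For readers who haven’t seen the double-copy written down, its standard BCJ form (schematically, with couplings suppressed; the generalizations in these talks modify the ingredients) organizes a gauge-theory amplitude as a sum over diagrams with only cubic vertices,

$$\mathcal{A}^{\text{gauge}}=\sum_i \frac{c_i\,n_i}{D_i},\qquad \mathcal{M}^{\text{gravity}}=\sum_i \frac{n_i\,\tilde{n}_i}{D_i},$$

where the $c_i$ are color factors, the $D_i$ are propagator denominators, and the $n_i$ are kinematic numerators chosen to obey the same Jacobi identities as the $c_i$. Swapping each color factor for a second copy of the numerators is the “double copy” that turns gluon amplitudes into graviton amplitudes.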

A fair number of talks dealt with what has always been our main application, collider physics. There the context shifted, but the message stayed consistent: for a “clean” enough process, two- or three-loop calculations can make a big difference, taking a prediction that would otherwise be completely off from experiment and bringing it into line. These are more useful the more that can be varied about the calculation: functions are more useful than numbers, for example. I was gratified to hear confirmation that a particular kind of process, where two massless particles like quarks become three massive particles like W or Z bosons, is one of these “clean enough” examples: it means someone will need to compute my “tardigrade” diagram eventually.
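
To unpack what an extra loop buys you (a schematic, not a statement about any particular process from the talks): a collider observable is a perturbative series in the strong coupling,

$$\sigma=\sigma_{\text{LO}}\left(1+c_1\,\alpha_s+c_2\,\alpha_s^2+c_3\,\alpha_s^3+\cdots\right),$$

where each additional loop order supplies the next coefficient $c_k$. For a “clean” enough process the first few terms dominate, so computing one more loop can be exactly what moves a prediction into agreement with experiment.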

If collider physics is our main application, N=4 super Yang-Mills has always been our main toy model. Jaroslav Trnka gave us the details behind Nima’s exciting talk from last year, and Nima had a whole new exciting talk this year with promised connections to category theory (connections he didn’t quite reach after speaking for two and a half hours). Anastasia Volovich presented two distinct methods for predicting square-root symbol letters, while my colleague Chi Zhang showed some exciting progress with the elliptic double-box, realizing the several-year dream of representing it in a useful basis of integrals and showcasing several interesting properties. Anne Spiering came over from the integrability side to show us just how special the “planar” version of the theory really is: by increasing the number of colors of gluons, she showed that one could smoothly go between an “integrability-esque” spectrum and a “chaotic” spectrum. Finally, Lance Dixon mentioned his progress with form-factors in his talk at the end of the conference, showing off some statistics of coefficients of different functions and speculating that machine learning might be able to predict them.

On the more mathematical side, Francis Brown showed us a new way to get numbers out of graphs, one distinct from, but related to, our usual interpretation in terms of Feynman diagrams. I’m still unsure what it will be used for, but the fact that it maps every graph to something finite probably has some interesting implications. Albrecht Klemm and Claude Duhr talked about two sides of the same story, their recent work on integrals involving Calabi-Yau manifolds. They focused on a particularly nice set of integrals, and time will tell whether the methods work more broadly, but there are some exciting suggestions that at least parts will.

There’s been a resurgence of the old dream of the S-matrix community, constraining amplitudes via “general constraints” alone, and several talks dealt with those ideas. Sebastian Mizera went in the other direction, and tried to test one of those “general constraints”, seeing under which circumstances he could prove that you can swap a particle going in with an antiparticle going out. Others went out to infinity, trying to understand amplitudes from the perspective of the so-called “celestial sphere”, where they appear to be governed by conformal field theories of some sort. A few talks dealt with amplitudes in string theory itself: Yvonne Geyer built them out of field-theory amplitudes, while Ashoke Sen explained how to include D-instantons in them.
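
For context, the “general constraint” Mizera tested is crossing symmetry. In its textbook two-to-two form (schematically, glossing over the questions of analytic continuation that were precisely the point of the talk), with Mandelstam invariants $s=(p_1+p_2)^2$ and $t=(p_1-p_3)^2$, moving particle 2 from the in-state to an outgoing antiparticle (and particle 3 the other way) amounts to exchanging invariants,

$$\mathcal{M}_{1\,\bar{3}\to\bar{2}\,4}(s,t)=\mathcal{M}_{1\,2\to 3\,4}(t,s),$$

and proving when that continuation is actually justified, rather than assuming it, was the substance of the test.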

We also had three “special talks” in the evenings. I’ve mentioned Nima’s already. Zvi Bern gave a retrospective talk that I’d somewhat cheesily describe as “good for the soul”: a look back to the early days of the field that reminded us of why we are who we are. Lance Dixon closed the conference with a light-hearted summary and a look to the future. That future includes next year’s Amplitudes, which, after a hasty discussion during this year’s conference, has now localized to Prague. Let’s hope it’s in person!