Tag Archives: string theory

Bonus Info For “Cosmic Paradox Reveals the Awful Consequence of an Observer-Free Universe”

I had a piece in Quanta Magazine recently, about a tricky paradox that’s puzzling quantum gravity researchers, and about some early hints at its resolution.

The paradox comes from trying to describe “closed universes”, which are universes where it is impossible to reach the edge, even if you had infinite time to do it. This could be because the universe wraps around like a globe, or because the universe is expanding so fast no traveler could ever reach an edge. Recently, theoretical physicists have been trying to describe these closed universes, and have noticed a weird issue: each such universe appears to have only one possible quantum state. In general, quantum systems have more possible states the more complex they are, so for a whole universe to have only one possible state is a very strange thing, implying a bizarrely simple universe. Most worryingly, our universe may well be closed. Does that mean that secretly, the real world has only one possible state?
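
For a sense of just how strange a one-state universe is, here is the standard counting (a textbook fact, not something specific to the papers behind the piece): a collection of $N$ qubits has

$$ \dim \mathcal{H} = 2^N $$

possible independent states, and the most entropy a system can have is the logarithm of that number, $S_{\max} = \ln(\dim \mathcal{H})$. A universe with a single state has $\dim \mathcal{H} = 1$ and therefore zero entropy, no matter how much seems to be going on inside it.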

There is a possible solution that a few groups are playing around with. The argument that a closed universe has only one state depends on the fact that nothing inside a closed universe can reach the edge. But if nothing can reach the edge, then trying to observe the universe as a whole from outside would tell you nothing of use. Instead, any reasonable measurement would have to come from inside the universe. Such a measurement introduces a new kind of “edge of the universe”, this time not in the far distance, but close by: the edge between an observer and the rest of the world. And when you add that edge to the calculations, the universe stops being closed, and has all the many states it ought to.

This was an unusually tricky story for me to understand. I narrowly avoided several misconceptions, and I’m still not sure I managed to dodge all of them. Likewise, it was unusually tricky for the editors to understand, and I suspect it was especially tricky for Quanta’s social media team to understand.

It was also, quite clearly, tricky for the readers to understand. So I thought I would use this post to clear up a few misconceptions. I’ll say a bit more about what I learned investigating this piece, and try to clarify what the result does and does not mean.

Q: I’m confused about the math terms you’re using. Doesn’t a closed set contain its boundary?

A: Annoyingly, what physicists mean by a closed universe is a bit different from what mathematicians mean by a closed manifold, which is in turn more restrictive than what mathematicians mean by a closed set. One way to think about this that helped me is that in an open set you can take a limit that takes you out of the set, which is like being able to describe a (possibly infinite) path that takes you “out of the universe”. A closed set doesn’t have that: every path, no matter how long, still ends up in the same universe.
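
If a concrete example helps (standard real-analysis fare, my addition rather than anything from the physicists I interviewed): in the open interval $(0,1)$, the sequence

$$ \tfrac{1}{2}, \; \tfrac{1}{3}, \; \tfrac{1}{4}, \; \dots \;\longrightarrow\; 0 \notin (0,1) $$

has a limit that escapes the set, while in the closed interval $[0,1]$ every convergent sequence of points keeps its limit inside the set. The physicists’ “closed universe” is the analogue of the second situation: no path, however long, leads out.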

Q: So a bunch of string theorists did a calculation and got a result that doesn’t make sense, a one-state universe. What if they’re just wrong?

A: Two things:

First, the people I talked to emphasized that it’s pretty hard to wiggle out of the conclusion. It’s not just a matter of saying you don’t believe in string theory and that’s that. The argument is based in pretty fundamental principles, and it’s not easy to propose a way out that doesn’t mess up something even more important.

That’s not to say it’s impossible. One of the people I interviewed, Henry Maxfield, thinks that some of the recent arguments are misunderstanding how to use one of their core techniques, in a way that accidentally presupposes the one-state universe.

But even he thinks that the bigger point, that closed universes have only one state, is probably true.

And that’s largely due to a second reason: there are older arguments that back the conclusion up.

One of the oldest dates back to John Wheeler, a physicist famous for both deep musings about the nature of space and time and coining evocative terms like “wormhole”. In the 1960’s, Wheeler argued that, in a theory where space and time can be curved, one should think of a system’s state as including every configuration it can evolve into over time, since it can be tricky to specify a moment “right now”. In a closed universe, you could expect a quantum system to explore every possible configuration…meaning that such a universe should be described by only one state.

Later, physicists studying holography ran into a similar conclusion. They kept noticing systems in quantum gravity where you can describe everything that happens inside by what happens on the edges. If there are no edges, that seems to suggest that in some sense there is nothing inside. Apparently, Lenny Susskind had a slide at the end of talks in the 90’s where he kept bringing up this point.

So even if the modern arguments are wrong, and even if string theory is wrong…it still looks like the overall conclusion is right.

Q: If a closed universe has only one state, does that make it deterministic, and thus classical?

A: Oh boy…

So, on the one hand, there is an idea, which I think also goes back to Wheeler, that asks: “if the universe as a whole has a wavefunction, how does it collapse?” One possibility is that the universe has only one state, so that nobody is needed to collapse the wavefunction: it already is in a definite state.

On the other hand, a universe with only one state does not actually look much like a classical universe. Our universe looks classical largely due to a process called decoherence, where small quantum systems interact with big quantum systems with many states, diluting quantum effects until the world looks classical. If there is only one state, there are no big systems to interact with, and the world has large quantum fluctuations that make it look very different from a classical universe.

Q: How, exactly, are you defining “observer”?

A: A few commenters helpfully chimed in to talk about how physics models observers as “witness” systems, objects that preserve some record of what happens to them. A simple example is a ball sitting next to a bowl: if you find the ball in the bowl later, it means something moved it. This process, preserving what happens and making it more obvious, is in essence how physicists think about observers.

However, this isn’t the whole story in this case. Here, different research groups introducing observers are doing it in different ways. That’s, in part, why none of them are confident they have the right answer.

One of the approaches describes an observer in terms of its path through space and time, its worldline. Instead of a detailed witness system with specific properties, all they do is pick out a line and say “the observer is there”. Identifying that line, and declaring it different from its surroundings, seems to be enough to recover the complexity the universe ought to have.

The other approach treats the witness system in a bit more detail. We usually treat an observer in quantum mechanics as infinitely large compared to the quantum systems they measure. This approach instead gives the observer a finite size, and uses that to estimate how far their experience will be from classical physics.

Crucially, neither approach is a matter of defining a physical object and looking for it in the theory. Given a collection of atoms, neither team can tell you what is an observer, and what isn’t. Instead, in each approach, the observer is arbitrary: a choice, made by us when we use quantum mechanics, of what to count as an observer and what to count as the rest of the world. That choice can be made in many different ways, and each approach tries to describe what happens when you change that choice.

This is part of what makes this approach uncomfortable to some more philosophically-minded physicists: it treats observers not as a predictable part of the physical world, but as a mathematical description used to make statements about the world.

Q: If these ideas come from AdS/CFT, which is an open universe, how do you use them to describe a closed universe?

A: While more examples emerged later, initially theorists were thinking about two types of closed universes:

First, think about a black hole. You may have heard that when you fall into a black hole, you watch the whole universe age away before your eyes, due to the dramatic differences in the passage of time caused by the extreme gravity. Once you’ve seen the outside universe fade away, you are essentially in a closed universe of your own. The outside world will never affect you again, and you are isolated, with no path to the outside. These black hole interiors are one of the examples theorists looked at.

The other example involves so-called “baby universes”. When physicists use quantum mechanics to calculate the chance of something happening, they have to add up every possible series of events that could have happened in between. For quantum gravity, this includes every possible arrangement of space and time: arrangements with different shapes, including ones with tiny extra “baby universes” which branch off from the main universe and return. Universes with these “baby universes” are another example that theorists considered to understand closed universes.

Q: So wait, are you actually saying the universe needs to be observed to exist? That’s ridiculous, didn’t the universe exist long before humans existed to observe it? Is this some sort of Copenhagen Interpretation thing, or that thing called QBism?

A: You’re starting to ask philosophical questions, and here’s the thing:

There are physicists who spend their time thinking about how to interpret quantum mechanics. They talk to philosophers, and try to figure out how to answer these kinds of questions in a consistent and systematic way, keeping track of all the potential pitfalls and implications. They’re part of a subfield called “quantum foundations”.

The physicists whose work I was talking about in that piece are not those people.

Of the people I interviewed, one of them, Rob Myers, probably has lunch with quantum foundations researchers on occasion. The others, based at places like MIT and the IAS, probably don’t even do that.

Instead, these are people trying to solve a technical problem, people whose first inclination is to put philosophy to the side, and “shut up and calculate”. These people did a calculation that ought to have worked, checking how many quantum states they could find in a closed universe, and found a weird and annoying answer: just one. Trying to solve the problem, they’ve done technical calculation work, introducing a path through the universe, or a boundary around an observer, and seeing what happens. While some of them may have their own philosophical leanings, they’re not writing works of philosophy. Their papers don’t talk through the philosophical implications of their ideas in all that much detail, and they may well have different thoughts as to what those implications are.

So while I suspect I know the answers they would give to some of these questions, I’m not sure.

Instead, how about I tell you what I think?

I’m not a philosopher, so I can’t promise my views will be consistent, or that they won’t suffer from some pitfall. But unlike other people’s views, I can tell you what my own views are.

To start off: yes, the universe existed before humans. No, there is nothing special about our minds, we don’t have psychic powers to create the universe with our thoughts or anything dumb like that.

What I think is that, if we want to describe the world, we ought to take lessons from science.

Science works. It works for many reasons, but two important ones stand out.

Science works because it leads to technology, and it leads to technology because it guides actions. It lets us ask, if I do this, what will happen? What will I experience?

And science works because it lets people reach agreement. It lets people reach agreement because it lets us ask, if I observe this, what do I expect you to observe? And if we agree, we can agree on the science.

Ultimately, if we want to describe the world with the virtues of science, our descriptions need to obey this rule: they need to let us ask “what if?” questions about observations.

That means that science cannot avoid an observer. It can often hide the observer, place them far away and give them an infinite mind to behold what they see, so that one observer is essentially the same as another. But we shouldn’t expect to always be able to do this. Sometimes, we can’t avoid saying something about the observer: about where they are, or how big they are, for example.

These observers, though, don’t have to actually exist. We should be able to ask “what if” questions about others, and that means we should be able to dream up fictional observers, and ask, if they existed, what would they see? We can imagine observers swimming in the quark-gluon plasma after the Big Bang, or sitting inside a black hole’s event horizon, or outside our visible universe. The existence of the observer isn’t a physical requirement, but a methodological one: a restriction on how we can make useful, scientific statements about the world. Our theory doesn’t have to explain where observers “come from”, and can’t and shouldn’t do that. The observers aren’t part of the physical world being described, they’re a precondition for us to describe that world.

Is this the Copenhagen Interpretation? I’m not a historian, but I don’t think so. The impression I get is that there was no real Copenhagen Interpretation, that Bohr and Heisenberg, while more deeply interested in philosophy than many physicists today, didn’t actually think things through in enough depth to have a perspective you can name and argue with.

Is this QBism? I don’t think so. It aligns with some things QBists say, but they say a lot of silly things as well. It’s probably some kind of instrumentalism, for what that’s worth.

Is it logical positivism? I’ve been told logical positivists would argue that the world outside the visible universe does not exist. If that’s true, I’m not a logical positivist.

Is it pragmatism? Maybe? What I’ve seen of pragmatism definitely appeals to me, but I’ve seen my share of negative characterizations as well.

In the end, it’s an idea about what’s useful and what’s not, about what moves science forward and what doesn’t. It tries to avoid being preoccupied with unanswerable questions, and as much as possible to cash things out in testable statements. If I do this, what happens? What if I did that instead?

The results I covered for Quanta, to me, show that the observer matters on a deep level. That isn’t a physical statement, it isn’t a mystical statement. It’s a methodological statement: if we want to be scientists, we can’t give up on the observer.

Post on the Weak Gravity Conjecture for FirstPrinciples.org

I have another piece this week on the FirstPrinciples.org Hub. If you’d like to know who they are, I say a bit about my impressions of them in my post on the last piece I had there. They’re still finding their niche, so there may be shifts in the kind of content they cover over time, but for now they’ve given me an opportunity to cover a few topics that are off the beaten path.

This time, the piece is what we in the journalism biz call an “explainer”. Instead of interviewing people about cutting-edge science, I wrote a piece to explain an older idea. It’s an idea that’s pretty cool, in a way I think a lot of people can actually understand: a black hole puzzle that might explain why gravity is the weakest force. It’s an idea that’s had an enormous influence, both in the string theory world where it originated and on people speculating more broadly about the rules of quantum gravity. If you want to learn more, read the piece!

Since I didn’t interview anyone for this piece, I don’t have the same sort of “bonus content” I sometimes give. Instead of interviewing, I brushed up on the topic, and the best resource I found was this review article written by Dan Harlow, Ben Heidenreich, Matthew Reece, and Tom Rudelius. It gave me a much better idea of the subtleties: how many different ways there are to interpret the original conjecture, and how different attempts to build on it reflect on different facets and highlight different implications. If you are a physicist curious what the whole thing is about, I recommend reading that review: while I try to give a flavor of some of the subtleties, a piece for a broad audience can only do so much.

I Have a Theory

“I have a theory,” says the scientist in the book. But what does that mean? What does it mean to “have” a theory?

First, there’s the everyday sense. When you say “I have a theory”, you’re talking about an educated guess. You think you know why something happened, and you want to check your idea and get feedback. A pedant would tell you you don’t really have a theory, you have a hypothesis. It’s “your” hypothesis, “your theory”, because it’s what you think happened.

The pedant would insist that “theory” means something else. A theory isn’t a guess, even an educated guess. It’s an explanation with evidence, tested and refined in many different contexts in many different ways, a whole framework for understanding the world, the most solid knowledge science can provide. Despite the pedant’s insistence, that isn’t the only way scientists use the word “theory”. But it is a common one, and a central one. You don’t really “have” a theory like this, though, except in the sense that we all do. These are explanations with broad consensus, things you either know of or don’t; they don’t belong to one person or another.

Except, that is, if one person takes credit for them. We sometimes say “Darwin’s theory of evolution”, or “Einstein’s theory of relativity”. In that sense, we could say that Einstein had a theory, or that Darwin had a theory.

Sometimes, though, “theory” doesn’t fit this standard, official definition, even when scientists use the word. And that changes what it means to “have” a theory.

For some researchers, a theory is a lens with which to view the world. This happens sometimes in physics, where you’ll find experts who want to think about a situation in terms of thermodynamics, or in terms of a technique called Effective Field Theory. It happens in mathematics, where some choose to analyze an idea with category theory not to prove new things about it, but just to translate it into category theory lingo. It’s most common, though, in the humanities, where researchers often specialize in a particular “interpretive framework”.

For some, a theory is a hypothesis, but also a pet project. There are physicists who come up with an idea (maybe there’s a variant of gravity with mass! maybe dark energy is changing!) and then focus their work around that idea. That includes coming up with ways to test whether the idea is true, showing the idea is consistent, and understanding what variants of the idea could be proposed. These ideas are hypotheses, in that they’re something the scientist thinks could be true. But they’re also ideas with many moving parts that motivate work by themselves.

Taken to the extreme, this kind of “having” a theory can go from healthy science to political bickering. Instead of viewing an idea as a hypothesis you might or might not confirm, it can become a platform to fight for. Instead of investigating consistency and proposing tests, you focus on arguing against objections and disproving your rivals. This sometimes happens in science, especially in more embattled areas, but it happens much more often with crackpots, where people who have never really seen science done can decide it’s time for their idea, right or wrong.

Finally, sometimes someone “has” a theory that isn’t a hypothesis at all. In theoretical physics, a “theory” can refer to a complete framework, even if that framework isn’t actually supposed to describe the real world. Some people spend time focusing on a particular framework of this kind, understanding its properties in the hope of getting broader insights. By becoming an expert on one particular theory, they can be said to “have” that theory.

Bonus question: in what sense do string theorists “have” string theory?

You might imagine that string theory is an interpretive framework, like category theory, with string theorists coming up with the “string version” of things others understand in other ways. This, for the most part, doesn’t happen. Without knowing whether string theory is true, there isn’t much benefit in just translating other things to string theory terms, and people for the most part know this.

For some, string theory is a pet project hypothesis. There is a community of people who try to get predictions out of string theory, or who investigate whether string theory is consistent. It’s not a huge number of people, but it exists. A few of these people can get more combative, or make unwarranted assumptions based on dedication to string theory in particular: for example, you’ll see the occasional argument that because something is difficult in string theory it must be impossible in any theory of quantum gravity. You see a spectrum in the community, from people for whom string theory is a promising project to people for whom it is a position that needs to be defended and argued for.

For the rest, the question of whether string theory describes the real world takes a back seat. They’re people who “have” string theory in the sense that they’re experts, and they use the theory primarily as a mathematical laboratory to learn broader things about how physics works. If you ask them, they might still say that they hypothesize string theory is true. But for most of these people, that question isn’t central to their work.

Bonus Material for “How Hans Bethe Stumbled Upon Perfect Quantum Theories”

I had an article last week in Quanta Magazine. It’s a piece about something called the Bethe ansatz, a method in mathematical physics that was discovered by Hans Bethe in the 1930’s, but which only really started being understood and appreciated around the 1960’s. Since then it’s become a key tool, used in theoretical investigations in areas from condensed matter to quantum gravity. In this post, I thought I’d say a bit about the story behind the piece and give some bonus material that didn’t fit.

When I first decided to do the piece I reached out to Jules Lamers. We were briefly office-mates when I worked in France, where he was giving a short course on the Bethe ansatz and the methods that sprung from it. It turned out he had also been thinking about writing a piece on the subject, and we considered co-writing for a bit, but that didn’t work for Quanta. He helped me a huge amount with understanding the history of the subject and tracking down the right sources. If you’re a physicist who wants to learn about these things, I recommend his lecture notes. And if you’re a non-physicist who wants to know more, I hope he gets a chance to write a longer popular-audience piece on the topic!

If you clicked through to Jules’s lecture notes, you’d see the word “Bethe ansatz” doesn’t appear in the title. Instead, you’d see the phrase “quantum integrability”. In classical physics, an “integrable” system is one where you can calculate what will happen by doing an integral, essentially letting you “solve” any problem completely. Systems you can describe with the Bethe ansatz are solvable in a more complicated quantum sense, so they get called “quantum integrable”. There’s a whole research field that studies these quantum integrable systems.
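
To give a flavor of the classical version (a standard textbook manipulation, nothing specific to Jules’s notes): for a single particle in one dimension, conservation of energy, $E = \tfrac{1}{2}m\dot{x}^2 + V(x)$, can be rearranged so that the time to travel between two points is literally an integral,

$$ t - t_0 = \int_{x_0}^{x} \frac{dx'}{\sqrt{\tfrac{2}{m}\big(E - V(x')\big)}}, $$

which “solves” the motion completely. The Bethe ansatz plays an analogous role for its quantum systems, though the machinery is considerably more elaborate.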

My piece ended up rushing through the history of the field. After talking about Bethe’s original discovery, I jumped ahead to ice. The Bethe ansatz was first used to think about ice in the 1960’s, but the developments I mentioned leading up to it, where experimenters noticed extra variability and theorists explained it with the positions of hydrogen atoms, happened earlier, in the 1930’s. (Thanks to the commenter who pointed out that this was confusing!) Baxter gets a starring role in this section and had an important role in tying things together, but other people (Lieb and Sutherland) were involved earlier, showing that the Bethe ansatz indeed could be used with thin sheets of ice. This era had a bunch of other big names that I didn’t have space to talk about: C. N. Yang makes an appearance, and while Faddeev comes up later, I didn’t mention that he had a starring role in the 1970’s in understanding the connection to classical integrability and proposing a mathematical structure to understand what links all these different integrable theories together.

I vaguely gestured at black holes and quantum gravity, but didn’t have space for more than that. The connection there is to a topic you might have heard of before if you’ve read about string theory, called AdS/CFT, a connection between two kinds of world that are secretly the same: a toy model of gravity called Anti-de Sitter space (AdS) and a theory without gravity that looks the same at any scale (called a Conformal Field Theory, or CFT). It turns out that in the most prominent example of this, the theory without gravity is integrable! In fact, it’s a theory I spent a lot of time working with back in my research days, called N=4 super Yang-Mills. This theory is kind of like QCD, and in some sense it has integrability for similar reasons to those that Feynman hoped for and Korchemsky and Faddeev found. But it actually goes much farther, outside of the high-energy approximation where Korchemsky and Faddeev’s result works, and in principle seems to include everything you might want to know about the theory. Nowadays, people are using it to investigate the toy model of quantum gravity, hoping to get insights about quantum gravity in general.

One thing I didn’t get a chance to mention at all is the connection to quantum computing. People are trying to build a quantum computer with carefully-cooled atoms. It’s important to test whether the quantum computer functions well enough, or whether its quantum states aren’t as perfect as they need to be. One way people have been testing this is with the Bethe ansatz: because it lets you calculate the behavior of special systems perfectly, you can set up your quantum computer to model one of those systems, and then check how close to the prediction your results are. You know that the theoretical result is exact, so any mismatch has to be due to an imperfection in your experiment.

I gave a quick teaser to a very active field, one that has fascinated a lot of prominent physicists and been applied in a wide variety of areas. I hope I’ve inspired you to learn more!

How Small Scales Can Matter for Large Scales

For a certain type of physicist, nothing matters more than finding the ultimate laws of nature for its tiniest building-blocks, the rules that govern quantum gravity and tell us where the other laws of physics come from. But because they know very little about those laws at this point, they can predict almost nothing about observations on the larger distance scales we can actually measure.

“Almost nothing” isn’t nothing, though. Theoretical physicists don’t know nature’s ultimate laws. But some things about them can be reasonably guessed. The ultimate laws should include a theory of quantum gravity. They should explain at least some of what we see in particle physics now, explaining why different particles have different masses in terms of a simpler theory. And they should “make sense”, respecting cause and effect, the laws of probability, and Einstein’s overall picture of space and time.

All of these are assumptions, of course. Further assumptions are needed to derive any testable consequences from them. But a few communities in theoretical physics are willing to take the plunge, and see what consequences their assumptions have.

First, there’s the Swampland. String theorists posit that the world has extra dimensions, which can be curled up in a variety of ways to hide from view, with different observable consequences depending on how the dimensions are curled up. This list of different observable consequences is referred to as the Landscape of possibilities. Based on that, some string theorists coined the term “Swampland” to represent an area outside the Landscape, containing observations that are incompatible with quantum gravity altogether, and tried to figure out what those observations would be.

In principle, the Swampland includes the work of all the other communities on this list, since a theory of quantum gravity ought to be consistent with other principles as well. In practice, people who use the term focus on consequences of gravity in particular. The earliest such ideas argued from thought experiments with black holes, finding results that seemed to demand that gravity be the weakest force for at least one type of particle. Later researchers would more frequently use string theory as an example, looking at what kinds of constructions people had been able to make in the Landscape to guess what might lie outside of it. They’ve used this to argue that dark energy might be temporary, and to try to figure out what traits new particles might have.

Second, I should mention naturalness. When talking about naturalness, people often use the analogy of a pen balanced on its tip. While possible in principle, it must have been set up almost perfectly, since any small imbalance would cause it to topple, and that perfection demands an explanation. Similarly, in particle physics, things like the mass of the Higgs boson and the strength of dark energy seem to be carefully balanced, so that a small change in how they were set up would lead to a much heavier Higgs boson or much stronger dark energy. The need for an explanation for the Higgs’ careful balance is why many physicists expected the Large Hadron Collider to discover additional new particles.
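
To make the “careful balance” a bit more concrete, here is the toy arithmetic people usually have in mind (a schematic illustration with made-up numbers, not a real calculation):

$$ m_H^2 = m_{\text{bare}}^2 + \delta m^2, \qquad \delta m^2 \sim \Lambda^2 \gg m_H^2 . $$

If the quantum corrections $\delta m^2$, coming from physics at some enormous energy scale $\Lambda$, are, say, $10^{30}$ times larger than the observed $m_H^2$, then the “bare” parameter has to cancel them to roughly thirty decimal places: the pen balanced on its tip.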

As I’ve argued before, this kind of argument rests on assumptions about the fundamental laws of physics. It assumes that the fundamental laws explain the mass of the Higgs, not merely by giving it an arbitrary number but by showing how that number comes from a non-arbitrary physical process. It also assumes that we understand well how physical processes like that work, and what kinds of numbers they can give. That’s why I think of naturalness as a type of argument, much like the Swampland, that uses the smallest scales to constrain larger ones.

Third is a host of constraints that usually go together: causality, unitarity, and positivity. Causality comes from cause and effect in a relativistic universe. Because two distant events can appear to happen in different orders depending on how fast you’re going, any way to send signals faster than light is also a way to send signals back in time, causing all of the paradoxes familiar from science fiction. Unitarity comes from quantum mechanics. If quantum calculations are supposed to give the probability of things happening, those probabilities should make sense as probabilities: for example, they should never go above one.
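
For example, the unitarity requirement can be stated compactly (the standard textbook form, not tied to any one of these research programs): the scattering matrix $S$ must satisfy

$$ S^\dagger S = 1, \qquad \text{so that} \qquad \sum_f \big|\langle f | S | i \rangle\big|^2 = 1, $$

meaning the probabilities of all possible outcomes of a collision add up to one, and no individual probability can sneak above it.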

You might guess that almost any theory would satisfy these constraints. But if you extend a theory to the smallest scales, some theories that otherwise seem sensible end up failing this test. Actually linking things up takes other conjectures about the mathematical form theories can have, conjectures that seem more solid than the ones underlying Swampland and naturalness constraints but that still can’t be conclusively proven. If you trust the conjectures, you can derive restrictions, often called positivity constraints when they demand that some set of observations is positive. There has been a renaissance in this kind of research over the last few years, including arguments that certain speculative theories of gravity can’t actually work.

Which String Theorists Are You Complaining About?

Do string theorists have an unfair advantage? Do they have an easier time getting hired, for example?

In one of the perennial arguments about this on Twitter, Martin Bauer posted a bar chart of faculty hires in the US by sub-field. The chart was compiled by Erich Poppitz from data in the US particle physics rumor mill, a website where people post information about who gets hired where for the US’s quite small number of permanent theoretical particle physics positions at research universities and national labs. The data covers 1994 to 2017, and shows one year, 1999, when there were more string theorists hired than theorists in all other sub-fields put together. The years around then also had many string theorists hired, but the proportion starts falling around the mid 2000’s…around when Lee Smolin wrote a book, The Trouble With Physics, arguing that string theorists had strong-armed their way into academic dominance. After that, the percentage of string theorists falls, oscillating between a tenth and a quarter of total hires.

Judging from that, you get the feeling that string theory’s critics are treating a temporary hiring fad as if it was a permanent fact. The late 1990’s were a time of high-profile developments in string theory that excited a lot of people. Later, other hiring fads dominated, often driven by experiments: I remember when the US decided to prioritize neutrino experiments and neutrino theorists had a much easier time getting hired, and there seem to be similar pushes now with gravitational waves, quantum computing, and AI.

Thinking about the situation in this way, though, ignores what many of the critics have in mind. That’s because the “string” column on that bar chart is not necessarily what people think of when they think of string theory.

If you look at the categories on Poppitz’s bar chart, you’ll notice something odd. “String” is itself a category. Another category, “lattice”, refers to lattice QCD, a method to compute the dynamics of quarks numerically. The third category, though, is a combination of three things: “ph/th/cosm”.

“Cosm” here refers to cosmology, another sub-field. “Ph” and “th” though aren’t really sub-fields. Instead, they’re arXiv categories, sections of the website arXiv.org where physicists post papers before they submit them to journals. The “ph” category is used for phenomenology, the type of theoretical physics where people try to propose models of the real world and make testable predictions. The “th” category is for “formal theory”, papers where theoretical physicists study the kinds of theories they use in more generality and develop new calculation methods, with insights that over time filter into “ph” work.

“String”, on the other hand, is not an arXiv category. When string theorists write papers, they’ll put them into “th” or “ph” or another relevant category (for example “gr-qc”, for general relativity and quantum cosmology). This means that when Poppitz distinguishes “ph/th/cosm” from “string”, he’s being subjective, using his own judgement to decide who counts as a string theorist.

So who counts as a string theorist? The simplest thing to do would be to check if their work uses strings. Failing that, they could use other tools of string theory and its close relatives, like Calabi-Yau manifolds, M-branes, and holography.

That might be what Poppitz was doing, but if he was, he was probably missing a lot of the people critics of string theory complain about. He even misses many people who describe themselves as string theorists. In an old post of mine I go through the talks at Strings, string theory’s big yearly conference, giving them finer-grained categories. The majority don’t use anything uniquely stringy.

Instead, I think critics of string theory have two kinds of things in mind.

First, most of the people who made their reputations on string theory are still in academia, and still widely respected. Some of them still work on string theory topics, but many now work on other things. Because they’re still widely respected, their interests have a substantial influence on the field. When one of them starts looking at connections between theories of two-dimensional materials, you get a whole afternoon of talks at Strings about theories of two-dimensional materials. Working on those topics probably makes it a bit easier to get a job, but also, many of the people working on them are students of these highly respected people, who just because of that have an easier time getting a job. If you’re a critic of string theory who thinks the founders of the field led physics astray, then you probably think they’re still leading physics astray even if they aren’t currently working on string theory.

Second, for many other people in physics, string theorists are their colleagues and friends. They’ll make fun of trends that seem overhyped and under-thought, like research on the black hole information paradox or the swampland, or hopes that a slightly tweaked version of supersymmetry will show up soon at the LHC. But they’ll happily use ideas developed in string theory when they prove handy, using supersymmetric theories to test new calculation techniques, string theory’s extra dimensions to inspire and ground new ideas for dark matter, or the math of strings themselves as interesting shortcuts to particle physics calculations. String theory is available as reference to these people in a way that other quantum gravity proposals aren’t. That’s partly due to familiarity and shared language (I remember a talk at Perimeter where string theorists wanted to learn from practitioners from another area and the discussion got bogged down by how they were using the word “dimension”), but partly due to skepticism of the various alternate approaches. Most people have some idea in their heads of deep problems with various proposals: screwing up relativity, making nonsense out of quantum mechanics, or over-interpreting on limited evidence. The most commonly believed criticisms are usually wrong, with objections long-known to practitioners of the alternate approaches, and so those people tend to think they’re being treated unfairly. But the wrong criticisms are often simplified versions of correct criticisms, passed down by the few people who dig deeply into these topics, criticisms that the alternative approaches don’t have good answers to.

The end result is that while string theory itself isn’t dominant, a sort of “string friendliness” is. Most of the jobs aren’t going to string theorists in the literal sense. But the academic world string theorists created keeps turning. People still respect string theorists and the research directions they find interesting, and people are still happy to collaborate and discuss with string theorists. For research communities people are more skeptical of, it must feel very isolating, like the world is still being run by their opponents. But this isn’t the kind of hegemony that can be solved by a revolution. Thinking that string theory is a failed research program, and people focused on it should have a harder time getting hired, is one thing. Thinking that everyone who respects at least one former string theorist should have a harder time getting hired is a very different goal. And if what you’re complaining about is “string friendliness”, not actual string theorists, then that’s what you’re asking for.

The Nowhere String

Space and time seem as fundamental as anything can get. Philosophers like Immanuel Kant thought that they were inescapable, that we could not conceive of the world without space and time. But increasingly, physicists suspect that space and time are not as fundamental as they appear. When they try to construct a theory of quantum gravity, physicists find puzzles, paradoxes that suggest that space and time may just be approximations to a more fundamental underlying reality.

One piece of evidence that quantum gravity researchers point to comes from dualities. These are pairs of theories that seem to describe different situations, including with different numbers of dimensions, but that are secretly indistinguishable, connected by a “dictionary” that lets you interpret any observation in one world in terms of an equivalent observation in the other world. By itself, duality doesn’t mean that space and time aren’t fundamental: as I explained in a blog post a few years ago, it could still be that one “side” of the duality is a true description of space and time, and the other is just a mathematical illusion. To show definitively that space and time are not fundamental, you would want to find a situation where they “break down”, where you can go from a theory that has space and time to a theory that doesn’t. Ideally, you’d want a physical means of going between them: some kind of quantum field that, as it shifts, changes the world between space-time and not space-time.

What I didn’t know when I wrote that post was that physicists already knew about such a situation in 1993.

Back when I was in pre-school, famous string theorist Edward Witten was trying to understand something that others had described as a duality, and realized there was something more going on.

In string theory, particles are described by lengths of vibrating string. In practice, string theorists like to think about what it’s like to live on the string itself, seeing it vibrate. In that world, there are two dimensions, one space dimension back and forth along the string and one time dimension going into the future. To describe the vibrations of the string in that world, string theorists use the same kind of theory that people use to describe physics in our world: a quantum field theory. In string theory, you have a two-dimensional quantum field theory stuck “inside” a theory with more dimensions describing our world. You can tell that this larger world exists from the kinds of vibrations your two-dimensional world can have, described by a type of quantum field called a scalar field. With ten scalar fields, ten different ways you can push energy into your stringy world, you can infer that the world around you is a space-time with ten dimensions.
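
In equations, the starting point is a worldsheet action along the lines of (schematically, suppressing the worldsheet metric, the overall normalization, and the extra fermionic pieces the superstring needs):

$$ S \sim \int d^2\sigma \; \partial_a X^\mu \, \partial^a X_\mu, \qquad \mu = 0, 1, \dots, 9, $$

where each $X^\mu(\sigma,\tau)$ is one of those ten scalar fields living on the two-dimensional world, and its value is the string’s position along one direction of the ten-dimensional space-time.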

String theory has “extra” dimensions beyond the three of space and one of time we’re used to, and these extra dimensions can be curled up in various ways to hide them from view, often using a type of shape called a Calabi-Yau manifold. In the late 80’s and early 90’s, string theorists had found a similarity between the two-dimensional quantum field theories you get folding string theory around some of these Calabi-Yau manifolds and another type of two-dimensional quantum field theory related to theories used to describe superconductors. People called the two types of theories dual, but Witten figured out there was something more going on.

Witten described the two types of theories in the same framework, and showed that they weren’t two equivalent descriptions of the same world. Rather, they were two different ways one theory could behave.

The two behaviors were connected by something physical: the value of a quantum field called a modulus field. This field can be described by a number, and that number can be positive or negative.

When the modulus field is a large positive number, then the theory behaves like string theory twisted around a Calabi-Yau manifold. In particular, the scalar fields have many different values they can take, values that are smoothly related to each other. These values are nothing more or less than the position of the string in space and time. Because the scalars can take many values, the string can sit in many different places, and because the values are smoothly related to each other, the string can smoothly move from one place to another.

When the modulus field is a large negative number, then the theory is very different. What people thought of as the other side of the duality, a theory like the theories used to describe superconductors, is the theory that describes what happens when the modulus field is large and negative. In this theory, the scalars can no longer take many values. Instead, they have one option, one stable solution. That means that instead of there being many different places the string could sit, describing space, there are no different places, and thus no space. The string lives nowhere.
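
For those who want a glimpse of the actual math, the potential energy in Witten’s setup (a “gauged linear sigma model”) contains, very roughly, a term like the following, shown here for the most famous example with many details suppressed, so treat it as a sketch rather than the precise formula:

$$ V \;\supset\; \frac{e^2}{2}\left( \sum_{i=1}^{5} |\phi_i|^2 \;-\; 5\,|p|^2 \;-\; r \right)^2, $$

where $r$ plays the role of the modulus field described above. For $r \gg 0$ the fields $\phi_i$ cannot all vanish and end up tracing out a Calabi-Yau shape, the geometric phase; for $r \ll 0$ it is $p$ that takes a value while the $\phi_i$ sit at a single point, the “nowhere” phase.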

These are two very different situations, one with space and one without. And they’re connected by something physical. You could imagine manipulating the modulus field, using other fields to funnel energy into it, pushing it back and forth from a world with space to a world of nowhere. Much more than the examples I was aware of, this is a super-clear example of a model where space is not fundamental, but where it can be manipulated, existing or not existing based on physical changes.

We don’t know whether a model like this describes the real world. But it’s gratifying to know that it can be written down, that there is a picture, in full mathematical detail, of how this kind of thing works. Hopefully, it makes the idea that space and time are not fundamental sound a bit more reasonable.

I Ain’t Afraid of No-Ghost Theorems

In honor of Halloween this week, let me say a bit about the spookiest term in physics: ghosts.

In particle physics, we talk about the universe in terms of quantum fields. There is an electron field for electrons, a gluon field for gluons, a Higgs field for Higgs bosons. The simplest fields, for the simplest particles, can be described in terms of just a single number at each point in space and time, a value describing how strong the field is. More complicated fields require more numbers.

Most of the fundamental forces have what we call vector fields. They’re called this because they are often described with vectors, lists of numbers that identify a direction in space and time. But these vectors actually contain too many numbers.

These extra numbers have to be tidied up in some way in order to describe vector fields in the real world, like the electromagnetic field or the gluon field of the strong nuclear force. There are a number of tricks, but the nicest is usually to add some extra particles called ghosts. Ghosts are designed to cancel out the extra numbers in a vector, leaving the right description for a vector field. They’re set up mathematically such that they can never be observed: they’re just a mathematical trick.
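
For the experts: in a covariant gauge, the Yang-Mills Lagrangian picks up a gauge-fixing term and a ghost term that compensates for it, the standard Faddeev-Popov construction (conventions vary between textbooks):

$$ \mathcal{L} = -\tfrac{1}{4} F^a_{\mu\nu} F^{a\,\mu\nu} \;-\; \tfrac{1}{2\xi}\left(\partial^\mu A^a_\mu\right)^2 \;+\; \bar{c}^a \left(-\partial^\mu D_\mu^{ab}\right) c^b . $$

The fields $c$ and $\bar{c}$ are the ghosts, there purely to cancel the unphysical components of $A_\mu$.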

Mathematical tricks aren’t all that spooky (unless you’re scared of mathematics itself, anyway). But in physics, ghosts can take on a spookier role as well.

In order to do their job cancelling those numbers, ghosts need to function as a kind of opposite to a normal particle, a sort of undead particle. Normal particles have kinetic energy: as they go faster and faster, they have more and more energy. Said another way, it takes more and more energy to make them go faster. Ghosts have negative kinetic energy: the faster they go, the less energy they have.
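
In the simplest toy picture (just an illustrative formula, not any particular theory), a normal particle’s kinetic energy grows with its momentum while a ghost’s shrinks without limit:

$$ E_{\text{normal}} = +\frac{p^2}{2m}, \qquad E_{\text{ghost}} = -\frac{p^2}{2m}. $$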

If ghosts are just a mathematical trick, that’s fine, they’ll do their job and cancel out what they’re supposed to. But sometimes, physicists accidentally write down a theory where the ghosts aren’t just a trick cancelling something out, but real particles you could detect, without anything to hide them away.

In a theory where ghosts really exist, the universe stops making sense. The universe defaults to the lowest energy it can reach. If making a ghost particle go faster reduces its energy, then the universe will make ghost particles go faster and faster, and make more and more ghost particles, until everything is jam-packed with super-speedy ghosts unto infinity, never-ending because it’s always possible to reduce the energy by adding more ghosts.

The absence of ghosts, then, is a requirement for a sensible theory. People prove theorems showing that their new ideas don’t create ghosts. And if your theory does start seeing ghosts…well, that’s the spookiest omen of all: an omen that your theory is wrong.

Why Quantum Gravity Is Controversial

Merging quantum mechanics and gravity is a famously hard physics problem. Explaining why merging quantum mechanics and gravity is hard is, in turn, a very hard science communication problem. The more popular descriptions tend to lead to misunderstandings, and I’ve posted many times over the years to chip away at those misunderstandings.

Merging quantum mechanics and gravity is hard…but despite that, there are proposed solutions. String Theory is supposed to be a theory of quantum gravity. Loop Quantum Gravity is supposed to be a theory of quantum gravity. Asymptotic Safety is supposed to be a theory of quantum gravity.

One of the great virtues of science and math is that we are, eventually, supposed to agree. Philosophers and theologians might argue to the end of time, but in math we can write down a proof, and in science we can do an experiment. If we don’t yet have the proof or the experiment, then we should reserve judgement. Either way, there’s no reason to get into an unproductive argument.

Despite that, string theorists and loop quantum gravity theorists and asymptotic safety theorists, famously, like to argue! There have been bitter, vicious, public arguments about the merits of these different theories, and decades of research doesn’t seem to have resolved them. To an outside observer, this makes quantum gravity seem much more like philosophy or theology than like science or math.

Why is there still controversy in quantum gravity? We can’t do quantum gravity experiments, sure, but if that were the problem physicists could just write down the possibilities and leave it at that. Why argue?

Some of the arguments are for silly aesthetic reasons, or motivated by academic politics. Some are arguments about which approaches are likely to succeed in future, which as always is something we can’t actually reliably judge. But the more justified arguments, the strongest and most durable ones, are about a technical challenge. They’re about something called non-perturbative physics.

Most of the time, when physicists use a theory, they’re working with an approximation. Instead of the full theory, they’re making an assumption that makes the theory easier to use. For example, if you assume that the velocity of an object is small, you can use Newtonian physics instead of special relativity. Often, physicists can systematically relax these assumptions, including more and more of the behavior of the full theory and getting a better and better approximation to the truth. This process is called perturbation theory.

Other times, this doesn’t work well. The full theory has some trait that isn’t captured by the approximations, something that hides away from these systematic tools. The theory has some important aspect that is non-perturbative.
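
In symbols, perturbation theory means expanding some quantity in a small parameter $g$, while the classic example of a non-perturbative effect is one that vanishes faster than any power of $g$ (a standard illustration, not tied to any particular quantum gravity proposal):

$$ A(g) \approx A_0 + A_1\, g + A_2\, g^2 + \cdots, \qquad A_{\text{non-pert}}(g) \sim e^{-1/g^2}. $$

The second kind of term has a Taylor series at $g=0$ that is identically zero, so no amount of refining the expansion will ever catch it.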

Every proposed quantum gravity theory uses approximations like this. The theory’s proponents try to avoid these approximations when they can, but often they have to approximate and hope they don’t miss too much. The opponents, in turn, argue that the theory’s proponents are missing something important, some non-perturbative fact that would doom the theory altogether.

Asymptotic Safety is built on top of an approximation, one different from what other quantum gravity theorists typically use. To its proponents, work using their approximation suggests that gravity works without any special modifications, that the theory of quantum gravity is easier to find than it seems. Its opponents aren’t convinced, and think that the approximation is missing something important which shows that gravity needs to be modified.

In Loop Quantum Gravity, the critics think the theory’s approximations miss space-time itself. Proponents of Loop Quantum Gravity have been unable to prove that their theory, if you take all the non-perturbative corrections into account, doesn’t just roll up all of space and time into a tiny spiky ball. They expect that their theory should allow for a smooth space-time like we experience, but the critics aren’t convinced, and without being able to calculate the non-perturbative physics neither side can convince the other.

String Theory was founded and originally motivated by perturbative approximations. Later, String Theorists figured out how to calculate some things non-perturbatively, often using other simplifications like supersymmetry. But core questions, like whether or not the theory allows a positive cosmological constant, seem to depend on non-perturbative calculations that the theory gives no instructions for how to do. Some critics don’t think there is a consistent non-perturbative theory at all, that the approximations String Theorists use don’t actually approximate to anything. Even within String Theory, there are worries that the theory might try to resist approximation in odd ways, becoming more complicated whenever a parameter is small enough that you could use it to approximate something.

All of this would be less of a problem with real-world evidence. Many fields of science are happy to use approximations that aren’t completely rigorous, as long as those approximations have a good track record in the real world. In general though, we don’t expect evidence relevant to quantum gravity any time soon. Maybe we’ll get lucky, and studies of cosmology will reveal something, or an experiment on Earth will have a particularly strange result. But nature has no obligation to help us out.

Without evidence, though, we can still make mathematical progress. You could imagine someone proving that the various perturbative approaches to String Theory become inconsistent when stitched together into a full non-perturbative theory. Alternatively, you could imagine someone proving that a theory like String Theory is unique, that no other theory can do some key thing that it does. Either of these seems unlikely to come any time soon, and most researchers in these fields aren’t pursuing questions like that. But the fact the debate could be resolved means that it isn’t just about philosophy or theology. There’s a real scientific, mathematical controversy, one rooted in our inability to understand these theories beyond the perturbative methods their proponents use. And while I don’t expect it to be resolved any time soon, one can always hold out hope for a surprise.

Amplitudes 2024, Continued

I’ve now had time to look over the rest of the slides from the Amplitudes 2024 conference, so I can say something about Thursday and Friday’s talks.

Thursday was gravity-focused. Zvi Bern’s review talk was actually a review, a tour of the state of the art in using amplitudes techniques to make predictions for gravitational wave physics. Bern emphasized that future experiments will require much more precision: two more orders of magnitude, which in our lingo amounts to two more “loops”. The current state of the art is three loops, but they’ve been hacking away at four, doing things piece by piece in a way that cleverly also yields publications (for example, they can do just the integrals needed for supergravity, which are simpler). Four loops here is the first time that the Feynman diagrams involve Calabi-Yau manifolds, so they will likely need techniques from some of the folks I talked about last week. Once they have four loops, they’ll want to go to five, since that is the level of precision you need to learn something about the material in neutron stars. The talk covered a variety of other developments, some of which were talked about later on Thursday and some of which were only mentioned here.

Of that day’s other speakers, Stefano De Angelis, Lucile Cangemi, Mikhail Ivanov, and Alessandra Buonanno also focused on gravitational waves. De Angelis talked about the subtleties that show up when you try to calculate gravitational waveforms directly with amplitudes methods, showcasing various improvements to the pipeline there. Cangemi talked about a recurring question with its own list of subtleties, namely how the Kerr metric for spinning black holes emerges from the math of amplitudes of spinning particles. Gravitational waves were the focus of only the second half of Ivanov’s talk, where he talked about how amplitudes methods can clear up some of the subtler effects people try to take into account. The first half was about another gravitational application, that of using amplitudes methods to compute the correlations of galaxy structures in the sky, a field where it looks like a lot of progress can be made. Finally, Buonanno gave the kind of talk she’s given a few times at these conferences, a talk that puts these methods in context, explaining how amplitudes results are packaged with other types of calculations into the Effective-One-Body framework, which is then more directly used at LIGO. This year’s talk went into more detail about what the predictions are actually used for, which I appreciated. I hadn’t realized that there have been a handful of black hole collisions discovered by other groups from LIGO’s data, a win for open science! Her slides had a nice diagram explaining which data from the gravitational wave is used to infer which black hole properties, quite a bit more organized than the statistical template-matching I was imagining. She explained the logic behind Bern’s statement that gravitational wave telescopes will need two more orders of magnitude, pointing out that that kind of precision is necessary to be sure that something that might appear to be a deviation from Einstein’s theory of gravity is not actually a subtle effect of known physics. Her method is typically adjusted to fit numerical simulations, but she showed that even without that adjustment its predictions now fit the numerics quite well, thanks in part to contributions from amplitudes calculations.

Of the other talks that day, David Kosower’s was the only one that didn’t explicitly involve gravity. Instead, his talk focused on a more general question, namely how to find a well-defined basis of integrals for Feynman diagrams, which turns out to involve some rather subtle mathematics and geometry. This is a topic that my former boss Jake Bourjaily worked on in a different context for some time, and I’m curious whether there is any connection between the two approaches. Oliver Schlotterer gave the day’s second review talk, once again of the “actually a review” kind, covering a variety of recent developments in string theory amplitudes. These include some new pictures of how string theory amplitudes that correspond to Yang-Mills theories “square” to amplitudes involving gravity at higher loops and progress towards going past two loops, the current state of the art for most string amplitude calculations. (For the experts: this does not involve taking the final integral over the moduli space, which is still a big unsolved problem.) He also talked about progress by Sebastian Mizera and collaborators in understanding how the integrals that show up in string theory make sense in the complex plane. This is a problem that people had mostly managed to avoid dealing with because of certain simplifications in the calculations people typically did (no moduli space integration, expansion in the string length), but taking things seriously means confronting it, and Mizera and collaborators found a novel solution to the problem that has already passed a lot of checks. Finally, Tobias Hansen’s talk also related to string theory, specifically in anti-de-Sitter space, where the duality between string theory and N=4 super Yang-Mills lets him and his collaborators do Yang-Mills calculations and see markedly stringy-looking behavior.

Friday began with Kevin Costello, whose not-really-a-review talk dealt with his work with Natalie Paquette showing that one can use an exactly-solvable system to learn something about QCD. This only works for certain rather specific combinations of particles: for example, in order to have three colors of quarks, they need to do the calculation for nine flavors. Still, they managed to do a calculation with this method that had not previously been done with more traditional means, and to me it’s impressive that anything like this works for a theory without supersymmetry. Mina Himwich and Diksha Jain both had talks related to a topic of current interest, “celestial” conformal field theory, a picture that tries to apply ideas from holography, in which a theory on the boundary of a space fully describes the interior, to the “boundary” of flat space, infinitely far away. Himwich talked about a symmetry observed in that research program, and how that symmetry can be seen using more normal methods, which also lead to some suggestions of how the idea might be generalized. Jain likewise covered a different approach, one in which one sets artificial boundaries in flat space and sees what happens when those boundaries move.

Yifei He described progress in the modern S-matrix bootstrap approach. Previously, this approach had gotten quite general constraints on amplitudes. She tried to do something more specific: predict the S-matrix for scattering of pions in the real world. By imposing compatibility with knowledge from low energies and high energies, she was able to find a much more restricted space of consistent S-matrices, and these turn out to match experimental results pretty well. Mathieu Giroux addressed an important question for a variety of parts of amplitudes research: how to predict the singularities of Feynman diagrams. He explored a recursive approach to solving Landau’s equations for these singularities, one which seems impressively powerful, in one case being able to find a solution that in text form is approximately the length of Harry Potter. Finally, Juan Maldacena closed the conference by talking about some progress he’s made towards an old idea, that of defining M theory in terms of a theory involving actual matrices. This is a very challenging thing to do, but he was at least able to tackle the simplest possible case, involving correlations between three observations. This had a known answer, so his work serves mostly as a confirmation that the original idea makes sense at least at this level.