
Amplitudes 2019 Retrospective

I’m back from Amplitudes 2019, and since I have more time I figured I’d write down a few more impressions.

Amplitudes runs all the way from practical LHC calculations to almost pure mathematics, and this conference had plenty of both as well as everything in between. On the more practical side a standard “pipeline” has developed: get a large number of integrals from generalized unitarity, reduce them to a more manageable number with integration-by-parts, and then compute them with differential equations. Vladimir Smirnov and Johannes Henn presented the state of the art in this pipeline: challenging QCD calculations that demanded powerful methods. Others aimed to replace various parts of the pipeline. Integration-by-parts could be avoided in the numerical unitarity approach discussed by Ben Page, or alternatively with the intersection theory techniques showcased by Pierpaolo Mastrolia. More radical departures included Stefan Weinzierl’s refinement of loop-tree duality, and Jacob Bourjaily’s advocacy of prescriptive unitarity. Robert Schabinger even brought up direct integration, though I mostly viewed his talk as an independent confirmation of the usefulness of Erik Panzer’s thesis. It also showcased an interesting integral that Lorenzo Tancredi and collaborators had previously represented as elliptic, but that turned out to be writable in terms of more familiar functions. It’s going to be interesting to see whether other such integrals arise, and whether they can be spotted in advance.
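
For readers who haven’t seen integration-by-parts in action, the textbook warm-up (my own illustration, not from any of the talks) is the one-loop massive tadpole, where a single total-derivative identity collapses an infinite family of integrals to one:

```latex
% One-loop massive tadpole: I_a = \int d^D k \, (k^2 - m^2)^{-a}.
% In dimensional regularization, integrals of total derivatives vanish:
\[
0 = \int d^D k \,\frac{\partial}{\partial k^\mu}
      \left[ \frac{k^\mu}{(k^2 - m^2)^a} \right]
  = (D - 2a)\, I_a \;-\; 2a\, m^2\, I_{a+1}
\]
% Solving for I_{a+1} reduces every I_a down to the single "master" I_1.
```

Real multi-loop QCD calculations need enormous systems of identities like this one, which is why this step is a bottleneck worth replacing.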

On the other end of the scale, Francis Brown was the only speaker deep enough in the culture of mathematics to insist on doing a blackboard talk. Since the conference hall didn’t actually have a blackboard, this was accomplished by projecting video of a piece of paper that he wrote on as the talk progressed. Despite the awkward setup, the talk was impressively clear, though there were enough questions that he ran out of time at the end and had to “cheat” by just projecting his notes instead. He presented a few theorems about the sort of integrals that show up in string theory. Federico Zerbini and Eduardo Casali’s talks covered similar topics, with the latter also involving intersection theory. Intersection theory also appeared in a poster from grad student Andrzej Pokraka: overall, an impressively broad showing for a part of mathematics that Sebastian Mizera first introduced to the amplitudes community less than two years ago.

Nima Arkani-Hamed’s talk on Wednesday fell somewhere in between. A series of airline mishaps brought him there only a few hours before his talk, and his own busy schedule sent him back to the airport right after the last question. The talk itself covered several topics, tied together a bit better than usual by a nice account in the beginning of what might motivate a “polytope picture” of quantum field theory. One particularly interesting aspect was a suggestion of a space, smaller than the amplituhedron, that might more accurately describe the “alphabet” that appears in N=4 super Yang-Mills amplitudes. If his proposal works, it may be that the infinite alphabet we were worried about for eight-particle amplitudes is actually finite. Ömer Gürdoğan’s talk mentioned this, and drew out some implications. I’m still unclear as to what this story says about whether the alphabet contains square roots, but that’s a topic for another day. My talk was right after Nima’s, and while he went over-time as always I compensated by accidentally going under-time. Overall, I think folks had fun regardless.


Amplitudes 2019

It’s that time of year again, and I’m at Amplitudes, my field’s big yearly conference. This year we’re in Dublin, hosted by Trinity.

Which also hosts the Book of Kells, and the occasional conference reception just down the hall from the Book of Kells

Increasingly, the organizers of Amplitudes have been setting aside a few slots for talks from people in other fields. This year the “closest” such speaker was Kirill Melnikov, who pointed out some of the hurdles that make it difficult to have useful calculations to compare to the LHC. Many of these hurdles aren’t things that amplitudes-people have traditionally worked on, but are still things that might benefit from our particular expertise. Another such speaker, Maxwell Hansen, is from a field called Lattice QCD. While amplitudeologists typically compute with approximations, order by order in more and more complicated diagrams, Lattice QCD instead simulates particle physics on supercomputers, chopping up their calculations on a grid. This allows them to study much stronger forces, including the messy interactions of quarks inside protons, but they have a harder time with the situations we’re best at, where two particles collide from far away. Apparently, though, they are making progress on that kind of calculation, with some clever tricks to connect it to calculations they know how to do. While I was a bit worried that this would let them fire all the amplitudeologists and replace us with supercomputers, they’re not quite there yet; nonetheless, they’re doing better than I would have expected. Other speakers from other fields included Leron Borsten, who has been applying the amplitudes concept of the “double copy” to M theory, and Andrew Tolley, who uses the kind of “positivity” properties that amplitudeologists find interesting to restrict the kinds of theories used in cosmology.

The biggest set of “non-traditional-amplitudes” talks focused on using amplitudes techniques to calculate the behavior not of particles but of black holes, to predict the gravitational wave patterns detected by LIGO. This year featured a record six talks on the topic, a sixth of the conference. Last year I commented that the research ideas from amplitudeologists on gravitational waves had gotten more robust, with clearer proposals for how to move forward. This year things have developed even further, with several initial results. Even more encouragingly, while there are several groups doing different things they appear to be genuinely listening to each other: there were plenty of references in the talks both to other amplitudes groups and to work by more traditional gravitational physicists. There’s definitely still plenty of lingering confusion that needs to be cleared up, but it looks like the community is robust enough to work through it.

I’m still busy with the conference, but I’ll say more when I’m back next week. Stay tuned for square roots, clusters, and Nima’s travel schedule. And if you’re a regular reader, please fill out last week’s poll if you haven’t already!

Things I’d Like to Know More About

This is an accountability post, of sorts.

As a kid, I wanted to know everything. Eventually, I realized this was a little unrealistic. Doomed to know some things and not others, I picked physics as a kind of triage. Other fields I could learn as an outsider: not well enough to compete with the experts, but enough to at least appreciate what they were doing. After watching a few string theory documentaries, I realized this wasn’t the case for physics: if I was ever going to understand what those string theorists were up to, I would have to go to grad school in string theory.

Over time, this goal lost focus. I’ve become a very specialized creature, an “amplitudeologist”. I didn’t have time or energy for my old questions. In an irony that will surprise no-one, a career as a physicist doesn’t leave much time for curiosity about physics.

One of the great things about this blog is how you guys remind me of those old questions, bringing me out of my overspecialized comfort zone. In that spirit, in this post I’m going to list a few things in physics that I really want to understand better. The idea is to make a public commitment: within a year, I want to understand one of these topics at least well enough to write a decent blog post on it.

Wilsonian Quantum Field Theory:

When you first learn quantum field theory as a physicist, you learn how unsightly infinite results get covered up via an ad-hoc-looking process called renormalization. Eventually you learn a more modern perspective, that these infinite results show up because we’re ignorant of the complete theory at high energies. You learn that you can think of theories at a particular scale, and characterize them by what happens when you “zoom” in and out, in an approach codified by the physicist Kenneth Wilson.
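
Schematically, the “zooming” is encoded in how each coupling runs with the energy scale (a standard formula, nothing specific to Wilson’s original papers):

```latex
% Renormalization group "running" of a coupling g with energy scale \mu:
\[
\mu \frac{\mathrm{d}g}{\mathrm{d}\mu} = \beta(g)
\]
% Zeros of \beta are scale-invariant theories: zooming changes nothing.
% Example, one-loop QED: \beta(e) = e^3/(12\pi^2) > 0, so the electric
% charge looks stronger the further in you zoom.
```

The “flows” in the next paragraph are trajectories of this equation, starting near one of those scale-invariant points.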

While I understand the basics of Wilson’s approach, the courses I took in grad school skipped the deeper implications. This includes the idea of theories that are defined at all energies, “flowing” from an otherwise scale-invariant theory perturbed with extra pieces. Other physicists are much more comfortable thinking in these terms, and the topic is important for quite a few deep questions, including what it means to properly define a theory and where laws of nature “live”. If I’m going to have an informed opinion on any of those topics, I’ll need to go back and learn the Wilsonian approach properly.

Wormholes:

If you’re a fan of science fiction, you probably know that wormholes are the most realistic option for faster-than-light travel, something that is at least allowed by the equations of general relativity. “Most realistic” isn’t the same as “realistic”, though. Opening a wormhole and keeping it stable requires some kind of “exotic matter”, and that matter needs to violate a set of restrictions, called “energy conditions”, that normal matter obeys. Some of these energy conditions are just conjectures, some we even know how to violate, while others are proven to hold for certain types of theories. Some energy conditions don’t rule out wormholes, but instead restrict their usefulness: you can have non-traversable wormholes (basically, two inescapable black holes that happen to meet in the middle), or traversable wormholes where the distance through the wormhole is always longer than the distance outside.
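
To make “energy conditions” concrete, the one most directly tied to wormholes is the null energy condition:

```latex
% Null energy condition (NEC): for every null vector k^\mu,
\[
T_{\mu\nu}\, k^\mu k^\nu \;\geq\; 0
\]
% For a perfect fluid with energy density \rho and pressure p this
% reduces to \rho + p \geq 0. Ordinary classical matter satisfies it;
% holding open a traversable wormhole throat requires violating it.
```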

I’ve seen a few talks on this topic, but I’m still confused about the big picture: which conditions have been proven, what assumptions were needed, and what do they all imply? I haven’t found a publicly-accessible account that covers everything. I owe it to myself as a kid, not to mention everyone who’s a kid now, to get a satisfactory answer.

Quantum Foundations:

Quantum Foundations is a field that many physicists think is a waste of time. It deals with the questions that troubled Einstein and Bohr, questions about what quantum mechanics really means, or why the rules of quantum mechanics are the way they are. These tend to be quite philosophical questions, where it’s hard to tell if people are making progress or just arguing in circles.

I’m more optimistic about philosophy than most physicists, at least when it’s pursued with enough analytic rigor. I’d like to at least understand the leading arguments for different interpretations, what constraints interpretations must satisfy, and what the main loopholes are. That way, if I end up concluding the field is a waste of time, at least I’d be making an informed decision.

Amplitudes in String and Field Theory at NBI

There’s a conference at the Niels Bohr Institute this week, on Amplitudes in String and Field Theory. Like the conference a few weeks back, this one is funded by the Simons Foundation, as part of Michael Green’s visit here.

The first day featured a two-part talk by Michael Green and Congkao Wen. They are looking at the corrections that string theory adds on top of theories of supergravity. These corrections are difficult to calculate directly from string theory, but one can figure out a lot about them from the kinds of symmetry and duality properties they need to have, using the mathematics of modular forms. While Michael’s talk introduced the topic with a discussion of older work, Congkao talked about their recent progress looking at this from an amplitudes perspective.

Francesca Ferrari’s talk on Tuesday also related to modular forms, while Oliver Schlotterer and Pierre Vanhove talked about a different corner of mathematics, single-valued polylogarithms. These single-valued polylogarithms are of interest to string theorists because they seem to connect two parts of string theory: the open strings that describe Yang-Mills forces and the closed strings that describe gravity. In particular, it looks like you can take a calculation in open string theory and just replace numbers and polylogarithms with their “single-valued counterparts” to get the same calculation in closed string theory. Interestingly, there is more than one way that mathematicians can define “single-valued counterparts”, but only one such definition, the one due to Francis Brown, seems to make this trick work. When I asked Pierre about this he quipped it was because “Francis Brown has good taste…either that, or String Theory has good taste.”
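
To give a flavor of the trick, here is how the single-valued map acts on the simplest numbers in these amplitudes, the zeta values (a standard fact, quoted here for context rather than from the talks):

```latex
% Brown's single-valued map "sv" on zeta values:
\[
\mathrm{sv}:\ \zeta(2n) \longmapsto 0, \qquad
\mathrm{sv}:\ \zeta(2n+1) \longmapsto 2\,\zeta(2n+1)
\]
% So \zeta(2) drops out entirely while \zeta(3) merely doubles, which is
% why even zeta values are absent from closed-string tree amplitudes.
```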

Wednesday saw several talks exploring interesting features of string theory. Nathan Berkovits discussed his new paper, which makes a certain context of AdS/CFT (a duality between string theory in certain curved spaces and field theory on the boundary of those spaces) manifest particularly nicely. By writing string theory in five-dimensional AdS space in the right way, he can show that if the AdS space is small it will generate the same Feynman diagrams that one would use to do calculations in N=4 super Yang-Mills. In the afternoon, Sameer Murthy showed how localization techniques can be used in gravity theories, including to calculate the entropy of black holes in string theory, while Yvonne Geyer talked about how to combine the string theory-like CHY method for calculating amplitudes with supersymmetry, especially in higher dimensions where the relevant mathematics gets tricky.

Thursday ended up focused on field theory. Carlos Mafra was originally going to speak, but he wasn’t feeling well, so instead I gave a talk about the “tardigrade” integrals I’ve been looking at. Zvi Bern talked about his work applying amplitudes techniques to make predictions for LIGO. This subject has advanced a lot in the last few years, and now Zvi and collaborators have finally done a calculation beyond what others had been able to do with older methods. They still have a way to go before they beat the traditional methods overall, but they’re off to a great start. Lance Dixon talked about two-loop five-particle non-planar amplitudes in N=4 super Yang-Mills and N=8 supergravity. These are quite a bit trickier than the planar amplitudes I’ve worked on with him in the past; in particular, it’s not yet possible to do this just by guessing the answer without considering Feynman diagrams.

Today was the last day of the conference, and the emphasis was on number theory. David Broadhurst described some interesting contributions from physics to mathematics, in particular emphasizing information that the Weierstrass formulation of elliptic curves omits. Eric D’Hoker discussed how the concept of transcendentality, previously used in field theory, could be applied to string theory. A few of his speculations seemed a bit farfetched (in particular, his setup needs to treat certain rational numbers as if they were transcendental), but after his talk I’m a bit more optimistic that there could be something useful there.

Hadronic Strings and Large-N Field Theory at NBI

One of string theory’s early pioneers, Michael Green, is currently visiting the Niels Bohr Institute as part of a program by the Simons Foundation. The program includes a series of conferences. This week we are having the first such conference, on Hadronic Strings and Large-N Field Theory.

The bulk of the conference focused on new progress on an old subject, using string theory to model the behavior of quarks and gluons. There were a variety of approaches on offer, some focused on particular approximations and others attempting to construct broader, “phenomenological” models.

The other talks came from a variety of subjects, loosely tied together by the topic of “large N field theories”. “N” here is the number of colors: while the real world has three “colors” of quarks, you can imagine a world with more. This leads to simpler calculations, and often to connections with string theory. Some talks dealt with attempts to “solve” certain large-N theories exactly. Others ranged farther afield, even to discussions of colliding black holes.

Assumptions for Naturalness

Why did physicists expect to see something new at the LHC, more than just the Higgs boson? Mostly, because of something called naturalness.

Naturalness, broadly speaking, is the idea that there shouldn’t be coincidences in physics. If two numbers that appear in your theory cancel out almost perfectly, there should be a reason that they cancel. Put another way, if your theory has a dimensionless constant in it, that constant should be close to one.

(To see why these two concepts are the same, think about a theory where two large numbers miraculously almost cancel, leaving just a small difference. Take the ratio of one of those large numbers to the difference, and you get a very large dimensionless number.)
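
A quick numerical version of that parenthetical:

```latex
% Two large numbers that miraculously almost cancel:
\[
A = 1.00000001 \times 10^{16}, \qquad
B = 1.00000000 \times 10^{16}, \qquad
A - B = 10^{8}
\]
% The "coincidence" reappears as a huge dimensionless ratio:
\[
\frac{A}{A - B} \approx 10^{8}
\]
```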

You might have heard it said that the mass of the Higgs boson is “unnatural”. There are many different physical processes that affect what we measure as the mass of the Higgs. We don’t know exactly how big these effects are, but we do know that they grow with the scale of “new physics” (aka the mass of any new particles yet to be discovered), and that they have to cancel to give the Higgs mass we observe. If we don’t see any new particles, the Higgs mass starts looking more and more unnatural, driving some physicists to the idea of a “multiverse”.
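
As a cartoon of that argument (schematic, not a precise formula):

```latex
% Cartoon of the Higgs fine-tuning problem:
\[
m_H^2 \;\sim\; m_{\mathrm{bare}}^2 + c\,\Lambda^2
\]
% \Lambda = scale of new physics, c = some order-one number.
% With m_H \approx 125~\mathrm{GeV} and \Lambda near the Planck scale
% (\sim 10^{19}~\mathrm{GeV}), the two terms on the right would have to
% cancel to roughly one part in 10^{34}.
```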

If you find parts of this argument hokey, you’re not alone. Critics of naturalness point out that we don’t really have a good reason to favor “numbers close to one”, nor do we have any way to quantify how “bad” a number far from one is (we don’t know the probability distribution, in other words). They critique theories that do preserve naturalness, like supersymmetry, for being increasingly complicated and unwieldy, violating Occam’s razor. And in some cases they act baffled by the assumption that there should be any “new physics” at all.

Some of these criticisms are reasonable, but some are distracting and off the mark. The problem is that the popular argument for naturalness leaves out some important assumptions. These assumptions are usually kept in mind by the people arguing for naturalness (at least the more careful people), but aren’t often made explicit. I’d like to state some of these assumptions. I’ll be framing the naturalness argument in a bit of an unusual (if not unprecedented) way. My goal is to show that some criticisms of naturalness don’t really work, while others still make sense.

I’d like to state the naturalness argument as follows:

  1. The universe should be ultimately described by a theory with no free dimensionless parameters at all. (For the experts: the theory should also be UV-finite.)
  2. We are reasonably familiar with theories of the sort described in 1.: we know roughly what they can look like.
  3. If we look at such a theory at low energies, it will appear to have dimensionless parameters again, based on the energy where we “cut off” our description. We understand this process well enough to know what kinds of values these parameters can take, starting from 2.
  4. Point 3. can only be consistent with the observed mass of the Higgs if there is some “new physics” at around the scales the LHC can measure. That is, there is no known way to start with a theory like those of 2. and get the observed Higgs mass without new particles.

Point 1. is often not explicitly stated. It’s an assumption, one that sits in the back of a lot of physicists’ minds and guides their reasoning. I’m really not sure if I can fully justify it; it seems like it should be a consequence of what a final theory is.

(For the experts: you’re probably wondering why I’m insisting on a theory with no free parameters, when usually this argument just demands UV-finiteness. I demand this here because I think this is the core reason why we worry about coincidences: free parameters of any intermediate theory must eventually be explained in a theory where those parameters are fixed, and “unnatural” coincidences are those we don’t expect to be able to fix in this way.)

Point 2. may sound like a stretch, but it’s less of one than you might think. We do know of a number of theories that have few or no dimensionless parameters (and that are UV-finite), they just don’t describe the real world. Treating these theories as toy models, we can hopefully get some idea of how theories like this should look. We also have a candidate theory of this kind that could potentially describe the real world, M theory, but it’s not fleshed out enough to answer these kinds of questions definitively at this point. At best it’s another source of toy models.

Point 3. is where most of the technical arguments show up. If someone talking about naturalness starts talking about effective field theory and the renormalization group, they’re probably hashing out the details of point 3. Parts of this point are quite solid, but once again there are some assumptions that go into it, and I don’t think we can say that this point is entirely certain.

Once you’ve accepted the arguments behind points 1.-3., point 4. follows. The Higgs is unnatural, and you end up expecting new physics.

Framed in this way, arguments about the probability distribution of parameters are missing the point, as are arguments from Occam’s razor.

The point is not that the Standard Model has unlikely parameters, or that some in-between theory has unlikely parameters. The point is that there is no known way to start with the kind of theory that could be an ultimate description of the universe and end up with something like the observed Higgs and no detectable new physics. Such a theory isn’t merely unlikely: if you take this argument seriously, it’s impossible. If your theory gets around this argument, it can be as cumbersome and Occam’s razor-violating as it wants; it’s still a better shot than no possible theory at all.

In general, the smarter critics of naturalness are aware of this kind of argument, and don’t just talk probabilities. Instead, they reject some combination of point 2. and point 3.

This is more reasonable, because point 2. and point 3. are, on some level, arguments from ignorance. We don’t know of a theory with no dimensionless parameters that can give something like the Higgs with no detectable new physics, but maybe we’re just not trying hard enough. Given how murky our understanding of M theory is, maybe we just don’t know enough to make this kind of argument yet, and the whole thing is premature. This is where probability can sneak back in, not as some sort of probability distribution on the parameters of physics but just as an estimate of our own ability to come up with new theories. We have to guess what kinds of theories can make sense, and we may well just not know enough to make that guess.

One thing I’d like to know is how many critics of naturalness reject point 1. Because point 1. isn’t usually stated explicitly, it isn’t often responded to explicitly either. The way some critics of naturalness talk makes me suspect that they reject point 1., that they honestly believe that the final theory might simply have some unexplained dimensionless numbers in it that we can only fix through measurement. I’m curious whether they actually think this, or whether I’m misreading them.

There’s a general point to be made here about framing. Suppose that tomorrow someone figures out a way to start with a theory with no dimensionless parameters and plausibly end up with a theory that describes our world, matching all existing experiments. (People have certainly been trying.) Does this mean naturalness was never a problem after all? Or does that mean that this person solved the naturalness problem?

Those sound like very different statements, but it should be clear at this point that they’re not. In principle, nothing distinguishes them. In practice, people will probably frame the result one way or another based on how interesting the solution is.

If it turns out we were missing something obvious, or if we were extremely premature in our argument, then in some sense naturalness was never a real problem. But if we were missing something subtle, something deep that teaches us something important about the world, then it should be fair to describe it as a real solution to a real problem, to cite “solving naturalness” as one of the advantages of the new theory.

If you ask for my opinion? You probably shouldn’t: I’m quite far from an expert in this corner of physics, not being a phenomenologist. But if you insist on asking anyway, I suspect there probably is something wrong with the naturalness argument. That said, I expect that whatever we’re missing will be something subtle and interesting, and that naturalness is a real problem that really needs to be solved.

How to Get a “Minimum Scale” Without Pixels

Zoom in, and the world gets stranger. Down past atoms, past protons and neutrons, far past the smallest scales we can probe at the Large Hadron Collider, we get to the scale at which quantum gravity matters: the Planck scale.

Weird things happen at the Planck scale. Space and time stop making sense. Read certain pop science articles, and they’ll tell you the Planck scale is the smallest scale, the scale where space and time are quantized, the “pixels of the universe”.

That last sentence, by the way, is not actually how the Planck scale works. In fact, there’s pretty good evidence that the universe doesn’t have “pixels”, that space and time are not quantized in that way. Even very tiny pixels would change the speed of light, making it different for different colors. Tiny effects like that add up, and astronomers would almost certainly have noticed an effect from even Planck-scale pixels. Unless your idea of “pixels” is fairly unusual, it’s already been ruled out.
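
If you’re curious what astronomers actually constrain, one common way to parametrize a “pixel-like” effect (a generic convention, not tied to any specific model) is an energy-dependent speed of light:

```latex
% Generic parametrization: a modified photon dispersion relation,
\[
E^2 \;\simeq\; p^2 c^2 \left( 1 \pm \xi\,\frac{E}{E_{\mathrm{Planck}}} \right)
\]
% Two photons from a gamma-ray burst at distance D, differing in
% energy by \Delta E, would then arrive separated by roughly
\[
\Delta t \;\sim\; \xi\,\frac{\Delta E}{E_{\mathrm{Planck}}}\,\frac{D}{c}
\]
% Burst timing already pushes the linear coefficient \xi below order one.
```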

If the Planck scale isn’t the scale of the “pixels of the universe”, why do people keep saying it is?

Part of the problem is that the real story is vaguer. We don’t know what happens at the Planck scale. It’s not just that we don’t know which theory of quantum gravity is right: we don’t even know what different quantum gravity proposals predict. People are trying to figure it out, and there are some more or less viable ideas, but ultimately all we know is that at the Planck scale our description of space-time should break down.

“Our description breaks down” is unfortunately not very catchy. Certainly, it’s less catchy than “pixels of the universe”. The other part of the problem is that most people don’t know what “our description breaks down” actually means.

So if that’s the part that’s puzzling you, maybe an example would help. This won’t be the full answer, though it could be part of the story. What it will be is an example of what “our description breaks down” can actually mean, how there can be a scale beyond which space-time stops making sense without there being “pixels”.

The example comes from string theory, from a concept called “T duality”. In string theory, “extra” dimensions beyond our usual three space and one time are curled up small, so that traveling along them just gets you back where you started. Instead of particles, there are strings, with length close to the Planck length.

Picture a loop of string in a small extra dimension. What can it do?


One thing it can do is move along the extra dimension. Since it has to end up back where it started, it can’t just move at any speed it wants. It turns out that the smaller the extra dimension, the more energy the string has when it moves around it.

The other thing it can do is wrap around the extra dimension. If it wraps around, the string has more energy if the dimension is larger, like a rubber band stretched around a pipe.

The string can do either or both of these multiple times. It can wrap many times around the extra dimension, or move in a quicker circle around it, or both at once. And if you calculate the energy of these combinations, you notice something: a string wound around a big circle has the same energy as a string moving around a small circle. In particular, you get the same energy on a circle of radius R, and a circle of radius l^2/R, where l is the length of the string.
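
In formulas, schematically (ignoring the string’s vibrational modes, which don’t change the argument):

```latex
% Closed string on a circle: R = radius of the extra dimension,
% l = string length, n = momentum number, w = winding number.
% Schematic energy (squared):
\[
E^2 \;\sim\; \left(\frac{n}{R}\right)^2 + \left(\frac{w R}{l^2}\right)^2
\]
% Send R \to l^2/R and swap n \leftrightarrow w: the two terms trade
% places, so the spectrum at radius R matches that at radius l^2/R.
```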

It turns out it’s not just the energy that’s the same: for everything that happens on a circle of radius R, there’s a matching description with a circle of radius l^2/R, with wrapping and moving swapped. We say that the two descriptions are dual: two seemingly different pictures that turn out to be completely physically indistinguishable.

Since the two pictures are indistinguishable, it doesn’t actually make sense to talk about dimensions smaller than the length of the string. It’s not that they can’t exist, or that they’re smaller than the “pixels of the universe”: it’s just that any description you write down of such a small dimension could just as easily have been of a larger, dual dimension. It’s that your picture, with its one obvious size for the curled-up dimension, has broken down and stopped making sense.

As I mentioned, this isn’t the whole picture of what happens at the Planck scale, even in string theory. It is an example of a broader idea that string theorists are investigating, that in order to understand space-time at the smallest scales you need to understand many different dual descriptions. And hopefully, it’s something you can hold in your mind, a specific example of what “our description breaks down” can actually mean in practice, without pixels.