Tag Archives: gravity

The Rippling Pond Universe

[Background: Someone told me they couldn’t imagine popularizing Quantum Field Theory in the same flashy way people popularize String Theory. Naturally I took this as a challenge. Please don’t take any statements about what “really exists” here too seriously, this isn’t intended as metaphysics, just metaphor.]


You probably learned about atoms in school.

Your teacher would have explained that these aren’t the same atoms the ancient Greeks imagined. Democritus thought of atoms as indivisible, unchanging spheres, the fundamental constituents of matter. We know, though, that atoms aren’t indivisible. They’re clouds of electrons, buzzing in their orbits around a nucleus of protons and neutrons. Chemists can divide the electrons from the rest, nuclear physicists can break the nucleus. The atom is not indivisible.

And perhaps your teacher remarked on how amazing it is, that the nucleus is such a tiny part of the atom, that the atom, and thus all solid matter, is mostly empty space.


You might have learned that protons and neutrons, too, are not indivisible. That each proton, and each neutron, is composed of three particles called quarks, particles which can be briefly freed by powerful particle colliders.

And you might have wondered, then, even if you didn’t think to ask: are quarks atoms? The real atoms, the Greek atoms, solid indestructible balls of fundamental matter?


They aren’t, by the way.


You might have gotten an inkling of this, learning about beta decay. In beta decay, a neutron transforms, becoming a proton, an electron, and a neutrino. Look for an electron inside a neutron, and you won’t find one. Even if you look at the quarks, you see the same transformation: a down quark becomes an up quark, plus an electron, plus a neutrino. If quarks were atoms, indivisible and unchanging, this couldn’t happen. There’s nowhere for the electron to hide.


In fact, there are no atoms, not the way the Greeks imagined. Just ripples.


Picture the universe as a pond. This isn’t a still pond: something has disturbed it, setting ripples and whirlpools in motion. These ripples and whirlpools skim along the surface of the pond, eddying together and scattering apart.

Our universe is not a simple pond, and so these are not simple ripples. They shine and shimmer, each with their own bright hue, colors beyond our ordinary experience that mix in unfamiliar ways. The different-colored ripples interact, merge and split, and the pond glows with their light.

Stand back far enough, and you notice patterns. See that red ripple, that stays together and keeps its shape, that meets other ripples and interacts in predictable ways. You might imagine the red ripple is an atom, truly indivisible…until it splits, transforms, into ripples of new colors. The quark has changed, down to up, an electron and a neutrino rippling away.

All of our world is encoded in the colors of these ripples, each kind of charge its own kind of hue. With a wink (like your teacher’s, telling you of empty atoms), I can tell you that distance itself is just a kind of ripple, one that links other ripples together. The pond’s very nature as a place is defined by the ripples on it.


This is Quantum Field Theory, the universe of ripples. Democritus said that in truth there are only atoms and the void, but he was wrong. There are no atoms. There is only the void. It ripples and shimmers, and each of us lives as a collection of whirlpools, skimming the surface, seeming concrete and real and vital…until the ripples dissolve, and a new pattern comes.

At the GGI Lectures on the Theory of Fundamental Interactions

I’m at the Galileo Galilei Institute for Theoretical Physics in Florence at their winter school, the GGI Lectures on the Theory of Fundamental Interactions. Next week I’ll be helping Lance Dixon teach Amplitudeology; this week, I’m catching the tail end of Ira Rothstein’s lectures.


The Galileo Galilei Institute, at the end of a long, winding road filled with small, speedy cars and motorcycles, in classic Italian fashion

Rothstein has been heavily involved in doing gravitational wave calculations using tools from quantum field theory, something that has recently captured a lot of interest from amplitudes people. Specifically, he uses Effective Field Theory, theories that are “effectively” true at some scale but hide away higher-energy physics. In the case of gravitational waves, these theories are a powerful way to calculate the waves that LIGO and VIRGO can observe without using the full machinery of general relativity.

After seeing Rothstein’s lectures, I’m reminded of something he pointed out at the QCD Meets Gravity conference in December. He emphasized then that even if amplitudes people get very good at drawing diagrams for classical general relativity, that won’t be the whole story: there’s a series of corrections needed to “match” between the quantities LIGO is able to see and the ones we’re able to calculate. Different methods incorporate these corrections in different ways, and the most intuitive approach for us amplitudes folks may still end up cumbersome once all the corrections are included. In typical amplitudes fashion, this just makes me wonder if there’s a shortcut: some way to compute, not just a piece that gets plugged in to an Effective Field Theory story, but the waves LIGO sees in one fell swoop (or at least, the part where gravity is weak enough that our methods are still useful). That’s probably a bit naive of me, though.

Epistemology, Not Metaphysics, Justifies Experiments

While I was visiting the IAS a few weeks back, they had a workshop on Quantum Information and Black Holes. I didn’t see many of the talks, but I did get to see Leonard Susskind talk about his new slogan, GR=QM.

For some time now, researchers have been uncovering deep connections between gravity and quantum mechanics. Juan Maldacena jump-started the field with the discovery of AdS/CFT, showing that theories that describe gravity in a particular curved space (Anti-de Sitter, or AdS) are equivalent to non-gravity quantum theories describing the boundary of that space (specifically, Conformal Field Theories, or CFTs). The two theories contain the same information and, with the right “dictionary”, describe the same physics: in our field’s vernacular, they’re dual. Since then, physicists have found broader similarities, situations where properties of quantum mechanics, like entanglement, are closely linked to properties of gravity theories. Maldacena and Susskind’s ER=EPR may be the most publicized of these, a conjectured equivalence between Einstein-Rosen bridges (colloquially known as wormholes) and entangled pairs of particles (famously characterized by Einstein, Podolsky, and Rosen).

GR=QM is clearly a riff on ER=EPR, but Susskind is making a more radical claim. Based on these developments, including his own work on quantum complexity, Susskind is arguing that the right kind of quantum mechanical system automatically gives rise to quantum gravity. What’s more, he claims that these systems will be available, using quantum computers, within roughly a decade. Within ten years or so, we’ll be able to do quantum gravity experiments.

That sounds ridiculous, until you realize he’s talking about dual theories. What he’s imagining is not an experiment at the absurdly high energies necessary to test quantum gravity, but rather a low-energy quantum mechanics experiment that is equivalent, by something like AdS/CFT, to a quantum gravity experiment.

Most people would think of that as a simulation, not an actual test of quantum gravity. Susskind, though, spends quite a bit of time defending the claim that it really is gravity, that literally GR=QM. His description of clever experiments and overarching physical principles is aimed at piling on evidence for that particular claim.

What do I think? I don’t think it matters much.

The claim Susskind is making is one of metaphysics: the philosophy of which things do and do not “really” exist. Unlike many physicists, I think metaphysics is worth discussing, that there are philosophers who make real progress with it.

But ultimately, Susskind is proposing a set of experiments. And what justifies experiments isn’t metaphysics, it’s epistemology: not what’s “really there”, but what we can learn.

What can we learn from the sorts of experiments Susskind is proposing?

Let’s get this out of the way first: we can’t learn which theory describes quantum gravity in our own world.

That’s because every one of these experiments relies on setting up a quantum system with particular properties. Every time, you’re choosing the “boundary theory”, the quantum mechanical side of GR=QM. Either you choose a theory with a known gravity partner, and you know how the inside should behave, or you choose a theory with an unknown partner. Either way, you have no reason to expect the gravity side to resemble the world we live in.

Plenty of people would get suspicious of Susskind here, and accuse him of trying to mislead people. They’re imagining headlines, “Experiment Proves String Theory”, based on a system intentionally set up to have a string theory dual, a system that can’t actually tell us whether string theory describes the real world.

That’s not where I’m going with this.

The experiments that Susskind is describing can’t prove string theory. But we could still learn something from them.

For one, we could learn whether these pairs of theories really are equivalent. AdS/CFT, ER=EPR, these are conjectures. In some cases, they’re conjectures with very good evidence. But they haven’t been proven, so it’s still possible there’s a problem people overlooked. One of the nice things about experiments and simulations is that they’re very good at exposing problems that were overlooked.

For another, we could get a better idea of how gravity behaves in general. By simulating a wide range of theories, we could look for overarching traits, properties that are common to most gravitational theories. We wouldn’t be sure that those properties hold in our world…but with enough examples, we could get pretty confident. Hopefully, we’d stumble on things that gravity has to do, in order to be gravity.

Susskind is quite capable of making these kinds of arguments, vastly more so than I. So it frustrates me that every time I’ve seen him talk or write about this, he hasn’t. Instead, he keeps framing things in terms of metaphysics, whether quantum mechanics “really is” gravity, whether the experiment “really” explores a wormhole. If he wants to usher in a new age of quantum gravity experiments, not just as a buzzword but as real, useful research, then eventually he’s going to have to stop harping on metaphysics and start talking epistemology. I look forward to when that happens.

4gravitons Meets QCD Meets Gravity

I’m at UCLA this week, for the workshop QCD Meets Gravity. I haven’t worked on QCD or gravity yet, so I’m mostly here as an interested observer, and as an excuse to enjoy Los Angeles in December.


I think there’s a song about this…

QCD Meets Gravity is a conference centered around the various ways that “gravity is Yang-Mills squared”. There are a number of tricks that let you “square” calculations in Yang-Mills theories (a type of theory that includes QCD) to get calculations in gravity, and this conference showcased most of them.
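The simplest example of this “squaring” is the Kawai-Lewellen-Tye (KLT) relation at four points, which builds a graviton amplitude out of two gluon amplitudes. Schematically (signs and normalizations differ between references, so treat this as a sketch rather than a precise formula):

```latex
M_4^{\text{grav}}(1,2,3,4) \;\propto\; s_{12}\,
A_4^{\text{YM}}(1,2,3,4)\,\tilde{A}_4^{\text{YM}}(1,2,4,3),
\qquad s_{12} = (p_1 + p_2)^2
```

Here $A_4^{\text{YM}}$ and $\tilde{A}_4^{\text{YM}}$ are color-stripped Yang-Mills amplitudes with their external legs in two different orderings, and $s_{12}$ is a Mandelstam invariant. The tricks showcased at the conference generalize this basic pattern in various directions.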

At Amplitudes this summer, I was disappointed there were so few surprises. QCD Meets Gravity was different, with several talks on new or preliminary results, including one by Julio Parra-Martinez where the paper went up in the last few minutes of the talk! Yu-tin Huang talked about his (still-unpublished) work with Nima Arkani-Hamed on “UV/IR Polytopes”. The story there is a bit like the conformal bootstrap, with constraints (in this case based on positivity) marking off a space of “allowed” theories. String theory, interestingly, is quite close to the boundary of what is allowed. Enrico Herrmann is working on a way to figure out which gravity integrands are going to diverge without actually integrating them, while Simon Caron-Huot, in his characteristic out-of-the-box style, is wondering whether supersymmetric black holes precess. We also heard a bit more about a few recent papers. Oliver Schlotterer’s talk cleared up one thing: apparently the GEF functions he defines in his paper on one-loop “Z theory” are pronounced “Jeff”. I kept waiting for him to announce “Jeff theory”, but unfortunately no such luck. Sebastian Mizera’s talk was a very clear explanation of intersection theory, the subject of his recent paper. As it turns out, intersection theory is the study of mathematical objects like the Beta function (which shows up extensively in string theory), taking them apart in a way very reminiscent of the “squaring” story of Yang-Mills and gravity.

The heart of the workshop this year was gravitational waves. Since LIGO started running, amplitudes researchers (including, briefly, me) have been looking for ways to get involved. This conference’s goal was to bring together amplitudes people and the gravitational wave community, to get a clearer idea of what we can contribute. Between talks and discussions, I feel like we all understand the problem better. Some things that the amplitudes community thought were required, like breaking the symmetries of special relativity, turn out to be accidents of how the gravitational wave community calculates things: approximations that made things easier for them, but make things harder for us. There are areas in which we can make progress quite soon, even areas in which amplitudes people have already made progress. The detectors for which the new predictions matter might still be in the future (LIGO can measure two or three “loops”; LISA will see up to four), but they will eventually be measured. Amplitudes and gravitational wave physics could turn out to be a very fruitful partnership.


A LIGO in the Darkness

For the few of you who haven’t yet heard: LIGO has detected gravitational waves from a pair of colliding neutron stars, and that detection has been confirmed by observations of the light from those stars.


They also provide a handy fact sheet.

This is a big deal! On a basic level, it means that we now have confirmation from other instruments and sources that LIGO is really detecting gravitational waves.

The implications go quite a bit further than that, though. You wouldn’t think that just one observation could tell you very much, but this is an observation of an entirely new type, the first time an event has been seen in both gravitational waves and light.

That, it turns out, means that this one observation clears up a whole pile of mysteries in one blow. It shows that at least some gamma ray bursts are caused by colliding neutron stars, that neutron star collisions can give rise to the high-power “kilonovas” capable of forming heavy elements like gold…well, I’m not going to be able to do justice to the full implications in this post. Matt Strassler has a pair of quite detailed posts on the subject, and Quanta magazine’s article has a really great account of the effort that went into the detection, including coordinating the network of telescopes that made it possible.

I’ll focus here on a few aspects that stood out to me.

One fun part of the story behind this detection was how helpful “failed” observations were. VIRGO (the European gravitational wave experiment) was running alongside LIGO at the time, but VIRGO didn’t see the event (or saw it so faintly it couldn’t be sure it saw it). This was actually useful, because VIRGO has a blind spot, and VIRGO’s non-observation told them the event had to have happened in that blind spot. That narrowed things down considerably, and allowed telescopes to close in on the actual merger. IceCube, the neutrino observatory that is literally a cubic kilometer chunk of Antarctica filled with sensors, also failed to detect the event, and this was also useful: along with evidence from other telescopes, it suggests that the “jet” of particles emitted by the merged neutron stars is tilted away from us.

One thing brought up at LIGO’s announcement was that seeing gravitational waves and electromagnetic light at roughly the same time puts limits on any difference between the speed of light and the speed of gravity. At the time I wondered if this was just a throwaway line, but it turns out a variety of proposed modifications of gravity predict that gravitational waves will travel slower than light. This event rules out many of those models, and tightly constrains others.
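To see why a mere 1.7-second offset is so constraining, it helps to do the arithmetic. The numbers below are rough published values for this event (a ~1.7 s gamma-ray delay, a source roughly 40 megaparsecs away), and this toy estimate ignores any intrinsic delay between the merger and the gamma-ray emission, which is why the actual published bounds are somewhat looser:

```python
# Back-of-envelope bound on the fractional difference between the
# speed of gravity and the speed of light, from near-simultaneous
# arrival after a very long journey.
MPC_IN_M = 3.086e22   # meters per megaparsec
C = 2.998e8           # speed of light, m/s

distance_m = 40 * MPC_IN_M          # ~40 Mpc, about 130 million light-years
travel_time_s = distance_m / C      # total travel time, in seconds
delay_s = 1.7                       # observed gamma-ray delay

# A 1.7 s offset accumulated over the whole trip corresponds to a
# fractional speed difference of at most:
fractional_difference = delay_s / travel_time_s
print(f"{fractional_difference:.1e}")  # roughly 4e-16
```

A speed difference of a few parts in $10^{16}$ leaves very little room for modified-gravity models in which gravitational waves lag behind light.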

The announcement from LIGO was screened at NBI, but they didn’t show the full press release. Instead, they cut to a discussion for local news featuring NBI researchers from the various telescope collaborations that observed the event. Some of this discussion was in Danish, so it was only later that I heard about the possibility of using the simultaneous measurement of gravitational waves and light to measure the expansion of the universe. While this event by itself didn’t result in a very precise measurement, as more collisions are observed the statistics will get better, which will hopefully clear up a discrepancy between two previous measures of the expansion rate.
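The idea behind that “standard siren” measurement fits in a few lines: the gravitational wave amplitude gives the distance to the merger directly, while the host galaxy’s light gives its redshift, and the ratio of recession velocity to distance is the Hubble constant. The numbers below are rough values for this event, used purely for illustration (the real analysis also has to account for the binary’s inclination and the galaxy’s peculiar velocity, so its uncertainties are much larger than this suggests):

```python
# Toy "standard siren" estimate of the Hubble constant.
C_KM_S = 2.998e5        # speed of light, km/s

redshift = 0.0098       # approximate redshift of the host galaxy
distance_mpc = 40.0     # approximate distance from the wave amplitude

recession_velocity = C_KM_S * redshift       # km/s
hubble_constant = recession_velocity / distance_mpc
print(f"{hubble_constant:.0f} km/s/Mpc")     # roughly 73
```

One event gives a crude number; many events, each an independent distance-redshift pair, are what will eventually make this competitive with the existing measurements.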

A few news sources made it sound like observing the light from the kilonova has let scientists see directly which heavy elements were produced by the event. That isn’t quite true, as stressed by some of the folks I talked to at NBI. What is true is that the light was consistent with patterns observed in past kilonovas, which are estimated to be powerful enough to produce these heavy elements. However, actually pointing out the lines corresponding to these elements in the spectrum of the event hasn’t been done yet, though it may be possible with further analysis.

A few posts back, I mentioned a group at NBI who had been critical of LIGO’s data analysis and raised doubts of whether they detected gravitational waves at all. There’s not much I can say about this until they’ve commented publicly, but do keep an eye on the arXiv in the next week or two. Despite the optimistic stance I take in the rest of this post, the impression I get from folks here is that things are far from fully resolved.

Congratulations to Rainer Weiss, Barry Barish, and Kip Thorne!

The Nobel Prize in Physics was announced this week, awarded to Rainer Weiss, Kip Thorne, and Barry Barish for their work on LIGO, the gravitational wave detector.


Many expected the Nobel to go to LIGO last year, but the Nobel committee waited. At the time, it was expected the prize would be awarded to Rainer Weiss, Kip Thorne, and Ronald Drever, the three founders of the LIGO project, but there were advocates for Barry Barish as well. Traditionally, the Nobel is awarded to at most three people, so the argument got fairly heated, with opponents arguing Barish was “just an administrator” and advocates pointing out that he was “just the administrator without whom the project would have been cancelled in the 90’s”.

All of this ended up being irrelevant when Drever died last March. The Nobel isn’t awarded posthumously, so the list of obvious candidates (or at least obvious candidates who worked on LIGO) was down to three, which simplified things considerably for the committee.

LIGO’s work is impressive and clearly Nobel-worthy, but I would be remiss if I didn’t mention that there is some controversy around it. In June, several of my current colleagues at the Niels Bohr Institute uploaded a paper arguing that if you subtract the gravitational wave signal that LIGO claims to have found then the remaining data, the “noise”, is still correlated between LIGO’s two detectors, which it shouldn’t be if it were actually just noise. LIGO hasn’t released an official response yet, but a LIGO postdoc responded with a guest post on Sean Carroll’s blog, and the team at NBI had responses of their own.

I’d usually be fairly skeptical of this kind of argument: it’s easy for an outsider looking at the data from a big experiment like this to miss important technical details that make the collaboration’s analysis work. That said, having seen some conversations between these folks, I’m a bit more sympathetic. LIGO hadn’t been communicating very clearly initially, and it led to a lot of unnecessary confusion on both sides.

One thing that I don’t think has been emphasized enough is that there are two claims LIGO is making: that they detected gravitational waves, and that they detected gravitational waves from black holes of specific masses at a specific distance. The former claim could be supported by the existence of correlated events between the detectors, without many assumptions as to what the signals should look like. The team at NBI seem to have found a correlation of that sort, but I don’t know if they still think the argument in that paper holds given what they’ve said elsewhere.

The second claim, that the waves were from a collision of black holes with specific masses, requires more work. LIGO compares the signal to various models, or “templates”, of black hole events, trying to find one that matches well. This is what the group at NBI subtracts to get the noise contribution. There’s a lot of potential for error in this sort of template-matching. If two templates are quite similar, it may be that the experiment can’t tell the difference between them. At the same time, the individual template predictions have their own sources of uncertainty, coming from numerical simulations and “loops” in particle physics-style calculations. I haven’t yet found a clear explanation from LIGO of how they take these various sources of error into account. It could well be that even if they definitely saw gravitational waves, they don’t actually have clear evidence for the specific black hole masses they claim to have seen.
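The core of template matching is simple to demonstrate, even if LIGO’s actual pipeline is far more sophisticated (it whitens the data and works with noise-weighted inner products in the frequency domain). The toy version below, with a made-up chirp-like waveform rather than a real black hole template, shows the basic idea: slide the template along noisy data, and the correlation peaks where the signal is hiding.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "template": a short chirp-like oscillation with a Gaussian
# envelope. This is illustrative only, not a real merger waveform.
t = np.linspace(0, 1, 400)
template = np.sin(2 * np.pi * (5 + 10 * t) * t) * np.exp(-((t - 0.5) ** 2) / 0.05)

# Bury the template in Gaussian noise at a known offset.
true_offset = 1000
data = rng.normal(0, 0.5, 3000)
data[true_offset:true_offset + len(template)] += template

# Matched filter: correlate the template against the data at every
# offset; the peak marks the best-fit location of the signal.
scores = np.correlate(data, template, mode="valid")
best_offset = int(np.argmax(scores))

print(best_offset)  # should land at (or very near) true_offset
```

The error question in the text lives one level up from this: when two different templates both correlate well with the data, or when the templates themselves carry theoretical uncertainty, the peak alone doesn’t tell you which physical parameters are right.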

I’m sure we’ll hear more about this in the coming months, as both groups continue to talk through their disagreement. Hopefully we’ll get a clearer picture of what’s going on. In the meantime, though, Weiss, Barish, and Thorne have accomplished something impressive regardless, and should enjoy their Nobel.

Visiting Uppsala

I’ve been in Uppsala this week, visiting Henrik Johansson‘s group.


The Ångström Laboratory here is substantially larger than an ångström, a clear example of false advertising.

As such, I haven’t had time to write a long post about the recent announcement by the LIGO and VIRGO collaborations. Luckily, Matt Strassler has written one of his currently all-too-rare posts on the subject, so if you’re curious you should check out what he has to say.

Looking at the map of black hole collisions in that post, I’m struck by how quickly things have improved. The four old detections are broad slashes across the sky, the newest is a small patch. Now that there are enough detectors to triangulate, all detections will be located that precisely, or better. A future map might be dotted with precise locations of black hole collisions, but it would still be marred by those four slashes: relics of the brief time when only two machines in the world could detect gravitational waves.

Textbook Review: Exploring Black Holes

I’m bringing a box of textbooks with me to Denmark. Most of them are for work: a few Quantum Field Theory texts I might use, a Complex Analysis book for when I inevitably forget how to do contour integration.

One of the books, though, is just for fun.


Exploring Black Holes is an introduction to general relativity for undergraduates. The book came out of a collaboration between Edwin F. Taylor, known for his contributions to physics teaching, and John Archibald Wheeler, who among a long list of achievements was responsible for popularizing the term “black hole”. The result is something quite unique: a general relativity course that requires no math more advanced than calculus, and no physics more advanced than special relativity.

It does this by starting, not with the full tensor-riddled glory of Einstein’s equations, but with specialized solutions to those equations, mostly the Schwarzschild solution that describes space around spherical objects (including planets, stars, and black holes). From there, it manages to introduce curved space in a way that is both intuitive and naturally grows out of what students learn about special relativity. It really is the kind of course a student can take right after their first physics course, and indeed as an undergrad that’s exactly what I did.
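For reference, the Schwarzschild solution the book is built around can be written (in units where $G = c = 1$, with $M$ the mass of the central object; sign conventions vary between texts) as:

```latex
d\tau^2 = \left(1 - \frac{2M}{r}\right) dt^2
- \frac{dr^2}{1 - 2M/r}
- r^2 \left(d\theta^2 + \sin^2\theta \, d\phi^2\right)
```

A single line of calculus-level mathematics, but with $r = 2M$, the event horizon, already visible in it: this is what makes the book’s approach work.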

With just the Schwarzchild solution and its close relatives, you can already answer most of the questions young students have about general relativity. In a series of “projects”, the book explores the corrections GR demands of GPS satellites, the process of falling into a black hole, the famous measurement of the advance of the perihelion of mercury, the behavior of light in a strong gravitational field, and even a bit of cosmology. In the end the students won’t know the full power of the theory, but they’ll get a taste while building valuable physical intuition.

Still, I wouldn’t bring this book with me if it were just an excellent undergraduate textbook. Exploring Black Holes is a great introduction to general relativity, but it also has a hilarious not-so-hidden agenda: inspiring future astronauts to jump into black holes.

“Nowhere could life be simpler or more relaxed than in a free-float frame, such as an unpowered spaceship falling toward a black hole.” – pg. 2-31

The book is full of quotes like this. One of the book’s “projects” involves computing what happens to an astronaut who falls into a black hole. The book takes special care to have students calculate that “spaghettification”, the process by which the tidal forces of a black hole stretch infalling observers into spaghetti, is, surprisingly, completely painless: the amount of time you experience it is always less than the amount of time it takes light (and thus also pain) to go from your feet to your head, for any (sufficiently calm) black hole.
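The key quantity here is the tidal acceleration: the difference in gravitational pull between your head and your feet. In the Newtonian approximation (a sketch, good enough for the scaling), for a body of height $h$ at distance $r$ from a mass $M$:

```latex
\Delta a \approx \frac{2GMh}{r^3}
```

Since the horizon sits at $r = 2GM/c^2$, the tidal acceleration there scales like $1/M^2$: the bigger the black hole, the gentler the crossing, with the stretching only becoming violent very close to the center.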

Why might Taylor and Wheeler want people of the future to jump into black holes? As the discussion on page B-3 of the book describes, the reason is on one level an epistemic one. As theorists, we’d like to reason about what lies inside the event horizon of black holes, but we face a problem: any direct test would be trapped inside, and we would never know the result, which some would argue makes such speculation unscientific. What Taylor and Wheeler point out is that it’s not quite true that no-one would know the results of such a test: if someone jumped into a black hole, they would be able to test our reasoning. If a whole scientific community jumped in, then the question of what is inside a black hole is from their perspective completely scientific.

Of course, I don’t think Taylor and Wheeler seriously thought their book would convince its readers to jump into black holes. For one, it’s unlikely anyone reading the book will get a chance. Still, I suspect that the idea that future generations might explore black holes gave Taylor and Wheeler some satisfaction, and a nice clean refutation of those who think physics inside the horizon is unscientific. Seeing as the result was an excellent textbook full of hilarious prose, I can’t complain.

More Travel

I’m visiting the Niels Bohr Institute this week, on my way back from Amplitudes.


You might recognize the place from old conference photos.

Amplitudes itself was nice. There weren’t any surprising new developments, but a lot of little “aha” moments when one of the speakers explained something I’d heard vague rumors about. I figured I’d mention a few of the things that stood out. Be warned, this is going to be long and comparatively jargon-heavy.

The conference organizers were rather daring in scheduling Nima Arkani-Hamed for the first talk, as Nima has a tendency to arrive at the last minute and talk for twice as long as you ask him to. Miraculously, though, things worked out, if only barely: Nima arrived at the wrong campus and ran most of the way back, showing up within five minutes of the start of the conference. He also stuck to his allotted time, possibly out of courtesy to his student, Yuntao Bai, who was speaking next.

Between the two of them, Nima and Yuntao covered an interesting development, tying the Amplituhedron together with the string theory-esque picture of scattering amplitudes pioneered by Freddy Cachazo, Song He, and Ellis Ye Yuan (or CHY). There’s a simpler (and older) Amplituhedron-like object called the associahedron that can be thought of as what the Amplituhedron looks like on the surface of a string, and CHY’s setup can be thought of as a sophisticated map that takes this object and turns it into the Amplituhedron. It was nice to hear from both Nima and his student on this topic, because Nima’s talks are often high on motivation but low on detail, so it was great that Yuntao was up next to fill in the blanks.

Anastasia Volovich talked about Landau singularities, a topic I’ve mentioned before. What I hadn’t appreciated was how much they can do with them at this point. Originally, Juan Maldacena had suggested that these singularities, mathematical points that determine the behavior of amplitudes first investigated by Landau in the 60’s, might explain some of the simplicity we’ve observed in N=4 super Yang-Mills. They ended up not being enough by themselves, but what Volovich and collaborators are discovering is that with a bit of help from the Amplituhedron they explain quite a lot. In particular, if they start with the Amplituhedron and do a procedure similar to Landau’s, they can find the simpler set of singularities allowed by N=4 super Yang-Mills, at least for the examples they’ve calculated. It’s still a bit unclear how this links to their previous investigations of these things in terms of cluster algebras, but it sounds like they’re making progress.

Dmitry Chicherin gave me one of those minor “aha” moments. One big useful fact about scattering amplitudes in N=4 super Yang-Mills is that they’re “dual” to different mathematical objects called Wilson loops, a fact which allows us to compare to the “POPE” approach of Basso, Sever, and Vieira. Chicherin asks the question: “What if you’re not calculating a scattering amplitude or a Wilson loop, but something halfway in between?” Interestingly, this has an answer, with the “halfway between” objects having a similar duality among themselves.

Yorgos Papathanasiou talked about work I’ve been involved with. I’ll probably cover it in detail in another post, so for now I’ll just mention that we’re up to six loops!

Andy Strominger talked about soft theorems. It’s always interesting seeing people who don’t traditionally work on amplitudes giving talks at Amplitudes. There’s a range of responses, from integrability people (who are basically welcomed like family) to people working on fairly unrelated areas with some “amplitudes” connection (met with yawns except from the few people interested in the connection). The response to Strominger was neither welcome nor boredom, but lively debate. He’s clearly doing something interesting, but many specialists worried he was ignorant of important no-go results in the field that could hamstring some of his bolder conjectures.

The second day focused on methods for more practical calculations, and had the overall effect of making me really want to clean up my code. Tiziano Peraro’s finite field methods in particular look like they could be quite useful. There were two competing bases of integrals on display, von Manteuffel’s finite integrals and Rutger Boels’s uniform transcendental integrals later in the conference. Both seem to have their own virtues, and I ended up asking Rob Schabinger if it was possible to combine the two, with the result that he’s apparently now looking into it.

The more practical talks that day had a clear focus on calculations with two loops, which are becoming increasingly viable for LHC-relevant calculations. From talking to people who work on this, I get the impression that the goal of these calculations isn’t so much to find new physics as to confirm and investigate new physics found via other methods. Things are complicated enough at two loops that for the moment it isn’t feasible to describe what all the possible new particles might do at that order, and instead the goal is to understand the standard model well enough that if new physics is noticed (likely based on one-loop calculations) then the details can be pinned down by two-loop data. But this picture could conceivably change as methods improve.

Wednesday was math-focused. We had a talk by Francis Brown on his conjecture of a cosmic Galois group. This is a topic I knew a bit about already, since it’s involved in something I’ve been working on. Brown’s talk cleared up some things, but also shed light on the vagueness of the proposal. As with Yorgos’s talk, I’ll probably cover more about this in a future post, so I’ll skip the details for now.

There was also a talk by Samuel Abreu on a much more physical picture of the “symbols” we calculate with. This is something I’ve seen presented before by Ruth Britto, and it’s a setup I haven’t looked into as much as I ought to. It does seem at the moment that they’re limited to one loop, which is a definite downside. Other talks discussed elliptic integrals, the bogeyman that we still can’t deal with by our favored means but that people are at least understanding better.

The last talk on Wednesday before the hike was by David Broadhurst, who’s quite a character in his own right. Broadhurst sat in the front row and asked a question after nearly every talk, usually bringing up papers at least fifty years old, if not one hundred and fifty. At the conference dinner he was exactly the right person to read the Address to the Haggis, resurrecting a thick Scottish accent from his youth. Broadhurst’s techniques for handling high-loop elliptic integrals are quite impressively powerful, leaving me wondering if the approach can be generalized.

Thursday focused on gravity. Radu Roiban gave a better idea of where he and his collaborators are on the road to seven-loop supergravity and what the next bottlenecks are along the way. Oliver Schlotterer’s talk was another one of those “aha” moments, helping me understand a key difference between two senses in which gravity is Yang-Mills squared: the Kawai-Lewellen-Tye (KLT) relations and BCJ. In particular, the latter is much more dependent on the specifics of how you write the scattering amplitude, so to the extent that you can prove something more like the former at higher loops (the original KLT relations, unlike BCJ, only held for trees) it’s quite valuable. Schlotterer has managed to do this at one loop, using the “Q-cut” method I’ve (briefly) mentioned before. The next day’s talk by Emil Bjerrum-Bohr focused more heavily on these Q-cuts, including a more detailed example at two loops than I’d seen that group present before.

There was also a talk by Walter Goldberger about using amplitudes methods for classical gravity, a subject I’ve looked into before. It was nice to see a more thorough presentation of those ideas, including a more honest appraisal of which amplitudes techniques are really helpful there.

There were other interesting topics, but I’m already way over my usual post length, so I’ll sign off for now. Videos from all but a few of the talks are now online, so if you’re interested you should watch them on the conference page.

You Can’t Smooth the Big Bang

As a kid, I was fascinated by cosmology. I wanted to know how the universe began, possibly disproving gods along the way, and I gobbled up anything that hinted at the answer.

At the time, I had to be content with vague slogans. As I learned more, I could match the slogans to the physics, to see what phrases like “the Big Bang” actually meant. A large part of why I went into string theory was to figure out what all those documentaries are actually about.

In the end, I didn’t work on cosmology, due to my ignorance of a few key facts while in college (mostly, who Vilenkin was). Thus, while I could match some of the old popularization stories to the science, there were a few I never really understood. In particular, there were two claims I never quite saw fleshed out: “The universe emerged from nothing via quantum tunneling” and “According to Hawking, the big bang was not a singularity, but a smooth change with no true beginning.”

As a result, I’m delighted that I’ve recently learned the physics behind these claims, in the context of a spirited take-down of both by Perimeter’s Director Neil Turok.


[Photo: my boss, Neil Turok]

Neil held a surprise string group meeting this week to discuss the paper I linked above, “No smooth beginning for spacetime” with Job Feldbrugge and Jean-Luc Lehners, as well as earlier work with Steffen Gielen. In it, he talked about problems in the two proposals I mentioned: Hawking’s suggestion that the big bang was smooth with no true beginning (really, the Hartle-Hawking no boundary proposal) and the idea that the universe emerged from nothing via quantum tunneling (really, Vilenkin’s tunneling from nothing proposal).

In popularization-speak, these two proposals sound completely different. In reality, though, they’re quite similar (and as Neil argues, they end up amounting to the same thing). I’ll steal a picture from his paper to illustrate:

[Figure from the paper: three drawings of the universe, discussed below]
The picture on the left depicts the universe under the Hartle-Hawking proposal, with time increasing upwards on the page. As the universe gets older, it looks like the expanding (de Sitter) universe we live in. At the beginning, though, there’s a cap, one on which time ends up being treated not in the usual way (Lorentzian space) but on the same footing as the other dimensions (Euclidean space). This lets space be smooth, rather than bunching up in a big bang singularity. After treating time in this way the result is reinterpreted (via a quantum field theory trick called Wick rotation) as part of normal space-time.
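For those who like a formula with their metaphors, here’s a sketch of what that “cap” does (my own gloss, using one standard coordinate choice, not anything taken from Neil’s paper): the expanding de Sitter universe has a metric which, under the Wick rotation t → -iτ, turns into the metric of a round sphere, and a sphere has no boundary, no singular point where things “begin.”

```latex
% Lorentzian de Sitter metric, with H the Hubble rate of the expansion
ds^2 = -dt^2 + H^{-2}\cosh^2(Ht)\,d\Omega_3^2

% Wick rotation: treat time on the same footing as space
t \to -i\tau

% Euclidean result: a round four-sphere, smooth everywhere
ds^2 = d\tau^2 + H^{-2}\cos^2(H\tau)\,d\Omega_3^2
```

The cap in the left-hand picture is half of this sphere, glued onto the expanding Lorentzian universe.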

What’s the connection to Vilenkin’s tunneling picture? Well, when we talk about quantum tunneling, we also end up describing it with Euclidean space. Saying that the universe tunneled from nothing and saying it has a Euclidean “cap” then end up being closely related claims.
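To make that relation a little more concrete (this is standard quantum mechanics, not anything specific to Neil’s argument): tunneling rates are usually estimated from the action of a classical solution evolving in imaginary, that is Euclidean, time.

```latex
% WKB / instanton estimate: the tunneling amplitude is controlled
% by the Euclidean action S_E of the imaginary-time solution
\text{Amplitude} \sim e^{-S_E/\hbar}
```

Vilenkin’s proposal assigns the newborn universe an amplitude of exactly this type, computed on a Euclidean cap, which is why the two pictures end up so close: roughly speaking, the disagreement between them comes down to the sign in the exponent.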

Before Neil’s work these two proposals weren’t thought of as the same, because they were thought to give different results. What Neil is arguing is that this is due to a fundamental mistake on Hartle and Hawking’s part. Specifically, Neil is arguing that the Wick rotation trick that Hartle and Hawking used doesn’t work in this context, when you’re trying to calculate small quantum corrections for gravity. In normal quantum field theory, it’s often easier to go to Euclidean space and use Wick rotation, but for quantum gravity Neil is arguing that this technique stops being rigorous. Instead, you should stay in Lorentzian space, and use a more powerful mathematical technique called Picard-Lefschetz theory.
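Here’s the rough shape of that technique, in schematic form (my gloss, hedging on the details): rather than Wick-rotating the action, you keep the oscillatory Lorentzian path integral and deform its integration contour into complexified field space, onto steepest-descent contours (“Lefschetz thimbles”) attached to saddle points, on which the integral genuinely converges.

```latex
% Lorentzian gravitational path integral (schematic)
\int \mathcal{D}g\, e^{iS[g]/\hbar}
\;=\; \sum_{\sigma} n_{\sigma} \int_{\mathcal{J}_{\sigma}} \mathcal{D}g\, e^{iS[g]/\hbar}

% J_sigma: Lefschetz thimbles, contours through complex saddle points
% along which Re(iS) decreases monotonically, so each piece converges
% n_sigma: integers recording which saddles the original contour picks up
```

The payoff is that the deformed integral is unambiguous: which saddle points contribute, and with what weight, is dictated by the contour rather than chosen by hand.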

Using this technique, Neil found that Hartle and Hawking’s nicely behaved result was mistaken, and the real result of what Hartle and Hawking were proposing looks more like Vilenkin’s tunneling proposal.

Neil then tried to see what happens when there’s some small perturbation from a perfect de Sitter universe. In general in physics, if you want to trust a result it ought to be stable: small changes should stay small. Otherwise, you’re not really starting from the right point, and you should instead be looking at wherever the changes end up taking you. What Neil found was that the Hartle-Hawking and Vilenkin proposals weren’t stable. If you start with a small wiggle in your no-boundary universe you get, not the purple middle drawing with small wiggles, but the red one with wiggles that rapidly grow unstable. The implication is that the Hartle-Hawking and Vilenkin proposals aren’t just secretly the same: neither can describe a stable universe.

Neil argues that this problem is quite general, and happens under the following conditions:

  1. A universe that begins smoothly and semi-classically (where quantum corrections are small) with no sharp boundary,
  2. with a positive cosmological constant (the de Sitter universe mentioned earlier),
  3. under which the universe expands many times, allowing the small fluctuations to grow large.

If the universe avoids one of those conditions (maybe the cosmological constant changes in the future and the universe stops expanding, for example) then you might be able to avoid Neil’s argument. But if not, you can’t have a smooth semi-classical beginning and still have a stable universe.

Now, no debate in physics ends just like that. Hartle (and collaborators) don’t disagree with Neil’s insistence on Picard-Lefschetz theory, but they argue there’s still a way to make their proposal work. Neil mentioned at the group meeting that he thinks even the new version of Hartle’s proposal doesn’t solve the problem; he’s been working out the calculation with his collaborators to make sure.

Often, one hears about an idea from science popularization and then it never gets mentioned again. The public hears about a zoo of proposals without ever knowing which ones worked out. I think child-me would appreciate hearing what happened to Hawking’s proposal for a universe with no boundary, and to Vilenkin’s proposal for a universe emerging from nothing. Adult-me certainly does. I hope you do too.