4gravitons Exchanges a Graviton

I had a new paper up last Friday with Michèle Levi and Andrew McLeod, on a topic I hadn’t worked on before: colliding black holes.

I am an “amplitudeologist”. I work on particle physics calculations, computing “scattering amplitudes” to find the probability that fundamental particles bounce off each other. This sounds like the farthest thing possible from black holes. Nevertheless, the two are tightly linked, through the magic of something called Effective Field Theory.

Effective Field Theory is a kind of “zoom knob” for particle physics. You “zoom out” to some chosen scale, and write down a theory that describes physics at that scale. Your theory won’t be a complete description: you’re ignoring everything that’s “too small to see”. It will, however, be an effective description: one that, at the scale you’re interested in, is effectively true.

Particle physicists usually use Effective Field Theory to go between different theories of particle physics, to zoom out from strings to quarks to protons and neutrons. But you can zoom out even further, all the way out to astronomical distances. Zoom out far enough, and even something as massive as a black hole looks like just another particle.

Just click the “zoom X10” button fifteen times, and you’re there!

In this picture, the force of gravity between black holes looks like particles (specifically, gravitons) going back and forth. With this picture, physicists can calculate what happens when two black holes collide with each other, making predictions that can be checked with new gravitational wave telescopes like LIGO.

Researchers have pushed this technique quite far. As the calculations get more and more precise (more and more “loops”), they have gotten more and more challenging. This is particularly true when the black holes are spinning, an extra wrinkle in the calculation that adds a surprising amount of complexity.

That’s where I came in. I can’t compete with the experts on black holes, but I certainly know a thing or two about complicated particle physics calculations. Amplitudeologists, like Andrew McLeod and me, have a grab-bag of tricks that make these kinds of calculations a lot easier. With Michèle Levi’s expertise working with spinning black holes in Effective Field Theory, we were able to combine our knowledge to push beyond the state of the art, to a new level of precision.

This project has been quite exciting for me, for a number of reasons. For one, it’s my first time working with gravitons: despite this blog’s name, I’d never published a paper on gravity before. For another, as my brother quipped when he heard about it, this is by far the most “applied” paper I’ve ever written. I mostly work with a theory called N=4 super Yang-Mills, a toy model we use to develop new techniques. This paper isn’t a toy model: the calculation we did should describe black holes out there in the sky, in the real world. There’s a decent chance someone will use this calculation to compare with actual data, from LIGO or a future telescope. That, in particular, is an absurdly exciting prospect.

Because this was such an applied calculation, it was an opportunity to explore the more applied part of my own field. We ended up using well-known techniques from that corner, but I look forward to doing something more inventive in future.

QCD Meets Gravity 2019

I’m at UCLA this week for QCD Meets Gravity, a conference about the surprising ways that gravity is “QCD squared”.

When I attended this conference two years ago, the community was branching out into a new direction: using tools from particle physics to understand the gravitational waves observed at LIGO.

At this year’s conference, gravitational waves have grown from a promising new direction to a large fraction of the talks. While there were still the usual talks about quantum field theory and string theory (everything from bootstrap methods to a surprising application of double field theory), gravitational waves have clearly become a major focus of this community.

This was highlighted before the first talk, when Zvi Bern brought up a recent paper by Thibault Damour. Bern and collaborators had recently used particle physics methods to push beyond the state of the art in gravitational wave calculations. Damour, an expert in the older methods, claims that Bern et al’s result is wrong, and in doing so also questions an earlier result by Amati, Ciafaloni, and Veneziano. More than that, Damour argued that the whole approach of using these kinds of particle physics tools for gravitational waves is misguided.

There was a lot of good-natured ribbing of Damour in the rest of the conference, as well as some serious attempts to confront his points. Damour’s argument so far is somewhat indirect, so there is hope that a more direct calculation (which Damour is currently pursuing) will resolve the matter. In the meantime, Julio Parra-Martinez described a reproduction of the older Amati/Ciafaloni/Veneziano result with more Damour-approved techniques, as well as additional indirect arguments that Bern et al got things right.

Before the QCD Meets Gravity community worked on gravitational waves, other groups had already built a strong track record in the area. One encouraging thing about this conference was how much the two communities are talking to each other. Several speakers came from the older community, and there were a lot of references in both groups’ talks to the other group’s work. This, more than even the content of the talks, felt like the strongest sign that something productive is happening here.

Many talks began by trying to motivate these gravitational calculations, usually to address the mysteries of astrophysics. Two talks were more direct, with Ramy Brustein and Pierre Vanhove speculating about new fundamental physics that could be uncovered by these calculations. I’m not the kind of physicist who does this kind of speculation, and I confess both talks struck me as rather strange. Vanhove in particular explicitly rejects the popular criterion of “naturalness”, making me wonder if his work is the kind of thing critics of naturalness have in mind.

Knowing When to Hold/Fold ‘Em in Science

The things one learns from Wikipedia. For example, today I learned that the country song “The Gambler” was selected for preservation by the US Library of Congress as being “culturally, historically, or artistically significant.”

You’ve got to know when to hold ’em, know when to fold ’em,

Know when to walk away, know when to run.

Knowing when to “hold ’em” or “fold ’em” is important in life in general, but it’s particularly important in science.

And not just on poker night

As scientists, we’re often trying to do something no-one else has done before. That’s exciting, but it’s risky too: sometimes whatever we’re trying simply doesn’t work. In those situations, it’s important to recognize when we aren’t making progress, and change tactics. The trick is, we can’t give up too early either: science is genuinely hard, and sometimes when we feel stuck we’re actually close to the finish line. Knowing which is which, when to “hold” and when to “fold”, is an essential skill, and a hard one to learn.

Sometimes, we can figure this out mathematically. Computational complexity theory classifies calculations by how difficult they are, including how long they take. If you can estimate how long a calculation should take, you can decide whether you'll finish it in a reasonable amount of time. If you just want a rough guess, you can do a simpler version of the calculation, see how long that takes, and then estimate how much longer the full one will take. If you figure out you're doomed, then it's time to switch to a more efficient algorithm, or to a different question entirely.
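To make that rough guess concrete: time a few scaled-down runs, fit how the cost grows, and extrapolate. Here's a minimal sketch of that idea; the power-law model and the `estimate_full_runtime` name are my own illustration, not from any particular toolkit, and real calculations may scale worse than any power law.

```python
import math

def estimate_full_runtime(samples, target_size):
    """Fit a power law t = c * n^k to timing samples [(n, t), ...]
    using a least-squares line in log-log space, then extrapolate
    the runtime to target_size."""
    logs = [(math.log(n), math.log(t)) for n, t in samples]
    mean_x = sum(x for x, _ in logs) / len(logs)
    mean_y = sum(y for _, y in logs) / len(logs)
    # Slope of the log-log fit is the scaling exponent k
    k = sum((x - mean_x) * (y - mean_y) for x, y in logs) / \
        sum((x - mean_x) ** 2 for x, _ in logs)
    log_c = mean_y - k * mean_x
    return math.exp(log_c) * target_size ** k

# Hypothetical timings from small runs, scaling like n^3:
samples = [(10, 1.0), (20, 8.0), (40, 64.0)]
# Extrapolate to the full-size problem (n = 100): about 1000 seconds
print(estimate_full_runtime(samples, 100))
```

If the extrapolated time comes out in years rather than days, that's the signal to fold and look for a better algorithm.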

Sometimes, we don’t just have to consider time, but money as well. If you’re doing an experiment, you have to estimate how much the equipment will cost, and how much it will cost to run it. Experimenters get pretty good at estimating these things, but they still screw up sometimes and run over budget. Occasionally this is fine: LIGO didn’t detect anything in its first eight-year run, but they upgraded the machines and tried again, and won a Nobel prize. Other times it’s a disaster, and money keeps being funneled into a project that never works. Telling the difference is crucial, and it’s something we as a community are still not so good at.

Sometimes we just have to follow our instincts. This is dangerous, because we have a bias (the “sunk cost fallacy”) to stick with something if we’ve already spent a lot of time or money on it. To counteract that, it’s good to cultivate a bias in the opposite direction, which you might call “scientific impatience”. Getting frustrated with slow progress may not seem productive, but it keeps you motivated to search for a better way. Experienced scientists get used to how long certain types of project take. Too little progress, and they look for another option. This can fail, killing a project that was going to succeed, but it can also prevent over-investment in a failing idea. Only a mix of instincts keeps the field moving.

In the end, science is a gamble. Like the song, we have to know when to hold ’em and fold ’em, when to walk away, and when to run an idea as far as it will go. Sometimes it works, and sometimes it doesn’t. That’s science.

Breakthrough Prize for Supergravity

This week, the Breakthrough Prize awarded $3 million to Sergio Ferrara, Daniel Z. Freedman, and Peter van Nieuwenhuizen, the discoverers of the theory of supergravity, as part of a special award separate from the yearly Fundamental Physics Prize. There's a nice interview with Peter van Nieuwenhuizen on the Stony Brook University website about his reaction to the award.

The Breakthrough Prize was designed to complement the Nobel Prize, rewarding deserving researchers who wouldn’t otherwise get the Nobel. The Nobel Prize is only awarded to theoretical physicists when they predict something that is later observed in an experiment. Many theorists are instead renowned for their mathematical inventions, concepts that other theorists build on and use but that do not by themselves make testable predictions. The Breakthrough Prize celebrates these theorists, and while it has also been awarded to others who the Nobel committee could not or did not recognize (various large experimental collaborations, Jocelyn Bell Burnell), this has always been the physics prize’s primary focus.

The Breakthrough Prize website describes supergravity as a theory that combines gravity with particle physics. That’s a bit misleading: while the theory does treat gravity in a “particle physics” way, unlike string theory it doesn’t solve the famous problems with combining quantum mechanics and gravity. (At least, as far as we know.)

It’s better to say that supergravity is a theory that links gravity to other parts of particle physics, via supersymmetry. Supersymmetry is a relationship between two types of particles: bosons, like photons, gravitons, or the Higgs, and fermions, like electrons or quarks. In supersymmetry, each type of boson has a fermion “partner”, and vice versa. In supergravity, gravity itself gets a partner, called the gravitino. Supersymmetry links the properties of particles and their partners together: both must have the same mass and the same charge. In a sense, it can unify different types of particles, explaining both under the same set of rules.

In the real world, we don’t see bosons and fermions with the same mass and charge. If gravitinos exist, then supersymmetry would have to be “broken”, giving them a high mass that makes them hard to find. Some hoped that the Large Hadron Collider could find these particles, but now it looks like it won’t, so there is no evidence for supergravity at the moment.

Instead, supergravity’s success has been as a tool to understand other theories of gravity. When the theory was proposed in the 1970s, it was thought of as a rival to string theory. Instead, over the years it consistently managed to point out aspects of string theory that the string theorists themselves had missed, for example noticing that the theory needed not just strings but higher-dimensional objects called “branes”. Now, supergravity is understood as one part of a broader string theory picture.

In my corner of physics, we try to find shortcuts for complicated calculations. We benefit a lot from toy models: simpler, unrealistic theories that let us test our ideas before applying them to the real world. Supergravity is one of the best toy models we’ve got, a theory that makes gravity simple enough that we can start to make progress. Right now, colleagues of mine are developing new techniques for calculations at LIGO, the gravitational wave telescope. If they hadn’t worked with supergravity first, they would never have discovered these techniques.

The discovery of supergravity by Ferrara, Freedman, and van Nieuwenhuizen is exactly the kind of work the Breakthrough Prize was created to reward. Supergravity is a theory with deep mathematics, rich structure, and wide applicability. There is of course no guarantee that such a theory describes the real world. What is guaranteed, though, is that someone will find it useful.

Amplitudes 2019

It’s that time of year again, and I’m at Amplitudes, my field’s big yearly conference. This year we’re in Dublin, hosted by Trinity.

Which also hosts the Book of Kells, and the occasional conference reception just down the hall from the Book of Kells

Increasingly, the organizers of Amplitudes have been setting aside a few slots for talks from people in other fields. This year the “closest” such speaker was Kirill Melnikov, who pointed out some of the hurdles that make it difficult to have useful calculations to compare to the LHC. Many of these hurdles aren’t things that amplitudes-people have traditionally worked on, but are still things that might benefit from our particular expertise. Another such speaker, Maxwell Hansen, is from a field called Lattice QCD. While amplitudeologists typically compute with approximations, order by order in more and more complicated diagrams, Lattice QCD instead simulates particle physics on supercomputers, chopping up their calculations on a grid. This allows them to study much stronger forces, including the messy interactions of quarks inside protons, but they have a harder time with the situations we’re best at, where two particles collide from far away. Apparently, though, they are making progress on that kind of calculation, with some clever tricks to connect it to calculations they know how to do. While I was a bit worried that this would let them fire all the amplitudeologists and replace us with supercomputers, they’re not quite there yet; nonetheless, they are doing better than I would have expected. Other speakers from other fields included Leron Borsten, who has been applying the amplitudes concept of the “double copy” to M theory, and Andrew Tolley, who uses the kind of “positivity” properties that amplitudeologists find interesting to restrict the kinds of theories used in cosmology.

The biggest set of “non-traditional-amplitudes” talks focused on using amplitudes techniques to calculate the behavior not of particles but of black holes, to predict the gravitational wave patterns detected by LIGO. This year featured a record six talks on the topic, a sixth of the conference. Last year I commented that the research ideas from amplitudeologists on gravitational waves had gotten more robust, with clearer proposals for how to move forward. This year things have developed even further, with several initial results. Even more encouragingly, while there are several groups doing different things they appear to be genuinely listening to each other: there were plenty of references in the talks both to other amplitudes groups and to work by more traditional gravitational physicists. There’s definitely still plenty of lingering confusion that needs to be cleared up, but it looks like the community is robust enough to work through it.

I’m still busy with the conference, but I’ll say more when I’m back next week. Stay tuned for square roots, clusters, and Nima’s travel schedule. And if you’re a regular reader, please fill out last week’s poll if you haven’t already!

Amplitudes in String and Field Theory at NBI

There’s a conference at the Niels Bohr Institute this week, on Amplitudes in String and Field Theory. Like the conference a few weeks back, this one was funded by the Simons Foundation, as part of Michael Green’s visit here.

The first day featured a two-part talk by Michael Green and Congkao Wen. They are looking at the corrections that string theory adds on top of theories of supergravity. These corrections are difficult to calculate directly from string theory, but one can figure out a lot about them from the kinds of symmetry and duality properties they need to have, using the mathematics of modular forms. While Michael’s talk introduced the topic with a discussion of older work, Congkao talked about their recent progress looking at this from an amplitudes perspective.

Francesca Ferrari’s talk on Tuesday also related to modular forms, while Oliver Schlotterer and Pierre Vanhove talked about a different corner of mathematics, single-valued polylogarithms. These single-valued polylogarithms are of interest to string theorists because they seem to connect two parts of string theory: the open strings that describe Yang-Mills forces and the closed strings that describe gravity. In particular, it looks like you can take a calculation in open string theory and just replace numbers and polylogarithms with their “single-valued counterparts” to get the same calculation in closed string theory. Interestingly, there is more than one way that mathematicians can define “single-valued counterparts”, but only one such definition, the one due to Francis Brown, seems to make this trick work. When I asked Pierre about this he quipped it was because “Francis Brown has good taste…either that, or String Theory has good taste.”

Wednesday saw several talks exploring interesting features of string theory. Nathan Berkovits discussed his new paper, which makes a certain context of AdS/CFT (a duality between string theory in certain curved spaces and field theory on the boundary of those spaces) manifest particularly nicely. By writing string theory in five-dimensional AdS space in the right way, he can show that if the AdS space is small it will generate the same Feynman diagrams that one would use to do calculations in N=4 super Yang-Mills. In the afternoon, Sameer Murthy showed how localization techniques can be used in gravity theories, including to calculate the entropy of black holes in string theory, while Yvonne Geyer talked about how to combine the string theory-like CHY method for calculating amplitudes with supersymmetry, especially in higher dimensions where the relevant mathematics gets tricky.

Thursday ended up focused on field theory. Carlos Mafra was originally going to speak, but he wasn’t feeling well, so instead I gave a talk about the “tardigrade” integrals I’ve been looking at. Zvi Bern talked about his work applying amplitudes techniques to make predictions for LIGO. This subject has advanced a lot in the last few years, and now Zvi and collaborators have finally done a calculation beyond what others had been able to do with older methods. They still have a way to go before they beat the traditional methods overall, but they’re off to a great start. Lance Dixon talked about two-loop five-particle non-planar amplitudes in N=4 super Yang-Mills and N=8 supergravity. These are quite a bit trickier than the planar amplitudes I’ve worked on with him in the past; in particular, it’s not yet possible to do this just by guessing the answer without considering Feynman diagrams.

Today was the last day of the conference, and the emphasis was on number theory. David Broadhurst described some interesting contributions from physics to mathematics, in particular emphasizing information that the Weierstrass formulation of elliptic curves omits. Eric D’Hoker discussed how the concept of transcendentality, previously used in field theory, could be applied to string theory. A few of his speculations seemed a bit farfetched (in particular, his setup needs to treat certain rational numbers as if they were transcendental), but after his talk I’m a bit more optimistic that there could be something useful there.

Amplitudes 2018

This week, I’m at Amplitudes, my field’s big yearly conference. The conference is at SLAC National Accelerator Laboratory this year, a familiar and lovely place.


Welcome to the Guest House California

It’s been a packed conference, with a lot of interesting talks. Recording and slides of most of them should be up at this point, for those following at home. I’ll comment on a few that caught my attention; I might do a more in-depth post later.

The first morning was dedicated to gravitational waves. At the QCD Meets Gravity conference last December I noted that amplitudes folks were very eager to do something relevant to LIGO, but that it was still a bit unclear how we could contribute (aside from Pierpaolo Mastrolia, who had already figured it out). The following six months appear to have cleared things up considerably, and Clifford Cheung and Donal O’Connell’s talks laid out quite concrete directions for this kind of research.

I’d seen Erik Panzer talk about the Hepp bound two weeks ago at Les Houches, but that was for a much more mathematically-inclined audience. It’s been interesting seeing people here start to see the implications: a simple method to classify and estimate (within 1%!) Feynman integrals could be a real game-changer.

Brenda Penante’s talk made me rethink a slogan I like to quote, that N=4 super Yang-Mills is the “most transcendental” part of QCD. While this is true in some cases, in many ways it’s actually least true for amplitudes, with quite a few counterexamples. For other quantities (like the form factors that were the subject of her talk) it’s true more often, and it’s still unclear when we should expect it to hold, or why.

Nima Arkani-Hamed has a reputation for talks that end up much longer than scheduled. Lately, it seems to be due to the sheer number of projects he’s working on. He had to rush at the end of his talk, which would have been about cosmological polytopes. I’ll have to ask his collaborator Paolo Benincasa for an update when I get back to Copenhagen.

Tuesday afternoon was a series of talks on the “NNLO frontier”, two-loop calculations that form the state of the art for realistic collider physics predictions. These talks brought home to me that the LHC really does need two-loop precision, and that the methods to get it are still pretty cumbersome. For those of us off in the airy land of six-loop N=4 super Yang-Mills, this is the challenge: can we make what these people do simpler?

Wednesday cleared up a few things for me, from what kinds of things you can write down in “fishnet theory” to how broad Ashoke Sen’s soft theorem is, to how fast John Joseph Carrasco could show his villanelle slide. It also gave me a clearer idea of just what simplifications are available for pushing to higher loops in supergravity.

Wednesday was also the poster session. It keeps being amazing how fast the field is growing; the sheer number of new faces was quite inspiring. One of those new faces pointed me to a paper I had missed, suggesting that elliptic integrals could end up trickier than most of us had thought.

Thursday featured two talks by people who work on the Conformal Bootstrap, one of our subfield’s closest relatives. (We’re both “bootstrappers” in some sense.) The talks were interesting, but there wasn’t a lot of engagement from the audience, so if the intent was to make a bridge between the subfields I’m not sure it panned out. Overall, I think we’re mostly just united by how we feel about Simon Caron-Huot, who David Simmons-Duffin described as “awesome and mysterious”. We also had an update on attempts to extend the Pentagon OPE to ABJM, a three-dimensional analogue of N=4 super Yang-Mills.

I’m looking forward to Friday’s talks, promising elliptic functions among other interesting problems.

At the GGI Lectures on the Theory of Fundamental Interactions

I’m at the Galileo Galilei Institute for Theoretical Physics in Florence at their winter school, the GGI Lectures on the Theory of Fundamental Interactions. Next week I’ll be helping Lance Dixon teach Amplitudeology; this week, I’m catching the tail end of Ira Rothstein’s lectures.


The Galileo Galilei Institute, at the end of a long, winding road filled with small, speedy cars and motorcycles, in classic Italian fashion

Rothstein has been heavily involved in doing gravitational wave calculations using tools from quantum field theory, something that has recently captured a lot of interest from amplitudes people. Specifically, he uses Effective Field Theory, theories that are “effectively” true at some scale but hide away higher-energy physics. In the case of gravitational waves, these theories are a powerful way to calculate the waves that LIGO and VIRGO can observe without using the full machinery of general relativity.

After seeing Rothstein’s lectures, I’m reminded of something he pointed out at the QCD Meets Gravity conference in December. He emphasized then that even if amplitudes people get very good at drawing diagrams for classical general relativity, that won’t be the whole story: there’s a series of corrections needed to “match” between the quantities LIGO is able to see and the ones we’re able to calculate. Different methods incorporate these corrections in different ways, and the most intuitive approach for us amplitudes folks may still end up cumbersome once all the corrections are included. In typical amplitudes fashion, this just makes me wonder if there’s a shortcut: some way to compute, not just a piece that gets plugged in to an Effective Field Theory story, but the waves LIGO sees in one fell swoop (or at least, the part where gravity is weak enough that our methods are still useful). That’s probably a bit naive of me, though.

4gravitons Meets QCD Meets Gravity

I’m at UCLA this week, for the workshop QCD Meets Gravity. I haven’t worked on QCD or gravity yet, so I’m mostly here as an interested observer, and as an excuse to enjoy Los Angeles in December.


I think there’s a song about this…

QCD Meets Gravity is a conference centered around the various ways that “gravity is Yang-Mills squared”. There are a number of tricks that let you “square” calculations in Yang-Mills theories (a type of theory that includes QCD) to get calculations in gravity, and this conference showcased most of them.

At Amplitudes this summer, I was disappointed there were so few surprises. QCD Meets Gravity was different, with several talks on new or preliminary results, including one by Julio Parra-Martinez where the paper went up in the last few minutes of the talk! Yu-tin Huang talked about his (still-unpublished) work with Nima Arkani-Hamed on “UV/IR Polytopes”. The story there is a bit like the conformal bootstrap, with constraints (in this case based on positivity) marking off a space of “allowed” theories. String theory, interestingly, is quite close to the boundary of what is allowed. Enrico Herrmann is working on a way to figure out which gravity integrands are going to diverge without actually integrating them, while Simon Caron-Huot, in his characteristic out-of-the-box style, is wondering whether supersymmetric black holes precess. We also heard a bit more about a few recent papers. Oliver Schlotterer’s talk cleared up one thing: apparently the GEF functions he defines in his paper on one-loop “Z theory” are pronounced “Jeff”. I kept waiting for him to announce “Jeff theory”, but unfortunately no such luck. Sebastian Mizera’s talk was a very clear explanation of intersection theory, the subject of his recent paper. As it turns out, intersection theory is the study of mathematical objects like the Beta function (which shows up extensively in string theory), taking them apart in a way very reminiscent of the “squaring” story of Yang-Mills and gravity.

The heart of the workshop this year was gravitational waves. Since LIGO started running, amplitudes researchers (including, briefly, me) have been looking for ways to get involved. This conference’s goal was to bring together amplitudes people and the gravitational wave community, to get a clearer idea of what we can contribute. Between talks and discussions, I feel like we all understand the problem better. Some things that the amplitudes community thought were required, like breaking the symmetries of special relativity, turn out to be accidents of how the gravitational wave community calculates things: approximations that made things easier for them, but make things harder for us. There are areas in which we can make progress quite soon, even areas in which amplitudes people have already made progress. The detectors for which the new predictions matter might still be in the future (LIGO can measure two or three “loops”, LISA will see up to four), but they will eventually be measured. Amplitudes and gravitational wave physics could turn out to be a very fruitful partnership.


A LIGO in the Darkness

For the few of you who haven’t yet heard: LIGO has detected gravitational waves from a pair of colliding neutron stars, and that detection has been confirmed by observations of the light from those stars.


They also provide a handy fact sheet.

This is a big deal! On a basic level, it means that we now have confirmation from other instruments and sources that LIGO is really detecting gravitational waves.

The implications go quite a bit further than that, though. You wouldn’t think that just one observation could tell you very much, but this is an observation of an entirely new type, the first time an event has been seen in both gravitational waves and light.

That, it turns out, means that this one observation clears up a whole pile of mysteries in one blow. It shows that at least some gamma ray bursts are caused by colliding neutron stars, that neutron star collisions can give rise to the high-power “kilonovas” capable of forming heavy elements like gold…well, I’m not going to be able to do justice to the full implications in this post. Matt Strassler has a pair of quite detailed posts on the subject, and Quanta magazine’s article has a really great account of the effort that went into the detection, including coordinating the network of telescopes that made it possible.

I’ll focus here on a few aspects that stood out to me.

One fun part of the story behind this detection was how helpful “failed” observations were. VIRGO (the European gravitational wave experiment) was running alongside LIGO at the time, but VIRGO didn’t see the event (or saw it so faintly it couldn’t be sure it saw it). This was actually useful, because VIRGO has a blind spot, and VIRGO’s non-observation told them the event had to have happened in that blind spot. That narrowed things down considerably, and allowed telescopes to close in on the actual merger. IceCube, the neutrino observatory that is literally a cubic kilometer chunk of Antarctica filled with sensors, also failed to detect the event, and this was also useful: along with evidence from other telescopes, it suggests that the “jet” of particles emitted by the merged neutron stars is tilted away from us.

One thing brought up at LIGO’s announcement was that seeing gravitational waves and electromagnetic light at roughly the same time puts limits on any difference between the speed of light and the speed of gravity. At the time I wondered if this was just a throwaway line, but it turns out a variety of proposed modifications of gravity predict that gravitational waves will travel slower than light. This event rules out many of those models, and tightly constrains others.

The announcement from LIGO was screened at NBI, but they didn’t show the full press release. Instead, they cut to a discussion for local news featuring NBI researchers from the various telescope collaborations that observed the event. Some of this discussion was in Danish, so it was only later that I heard about the possibility of using the simultaneous measurement of gravitational waves and light to measure the expansion of the universe. While this event by itself didn’t result in a very precise measurement, as more collisions are observed the statistics will get better, which will hopefully clear up a discrepancy between two previous measures of the expansion rate.

A few news sources made it sound like observing the light from the kilonova has let scientists see directly which heavy elements were produced by the event. That isn’t quite true, as stressed by some of the folks I talked to at NBI. What is true is that the light was consistent with patterns observed in past kilonovas, which are estimated to be powerful enough to produce these heavy elements. However, actually pointing out the lines corresponding to these elements in the spectrum of the event hasn’t been done yet, though it may be possible with further analysis.

A few posts back, I mentioned a group at NBI who had been critical of LIGO’s data analysis and raised doubts of whether they detected gravitational waves at all. There’s not much I can say about this until they’ve commented publicly, but do keep an eye on the arXiv in the next week or two. Despite the optimistic stance I take in the rest of this post, the impression I get from folks here is that things are far from fully resolved.