
When to Trust the Contrarians

One of my colleagues at the NBI had an unusual experience: one of his papers took a full year to get through peer review. This happens often in math, where reviewers will diligently check proofs for errors, but it’s quite rare in physics: usually the path from writing to publication is much shorter. Then again, the delays shouldn’t have been too surprising for him, given what he was arguing.

My colleague Mohamed Rameez, along with Jacques Colin, Roya Mohayaee, and Subir Sarkar, wants to argue against one of the most famous astronomical discoveries of the last few decades: that the expansion of our universe is accelerating, and thus that an unknown “dark energy” fills the universe. They argue that one of the key pieces of evidence for acceleration is mistaken: that a large region of the universe around us is in fact “flowing” in one direction, and that this flow tricked astronomers into thinking the universe’s expansion was accelerating. You might remember a paper making a related argument back in 2016. I didn’t like the media reaction to that paper, and my post triggered a response by the authors, one of whom (Sarkar) is on this paper as well.

I’m not an astronomer or an astrophysicist. I’m not qualified to comment on their argument, and I won’t. I’d still like to know whether they’re right, though. And that means figuring out which experts to trust.

Pick anything we know in physics, and you’ll find at least one person who disagrees. I don’t mean a crackpot, though they exist too. I mean an actual expert who is convinced the rest of the field is wrong. A contrarian, if you will.

I used to be very unsympathetic to these people. I was convinced that the big results of a field are rarely wrong, because of how much is built off of them. I thought that even if a field was using dodgy methods or sloppy reasoning, its big results get used in so many different situations that, if they were wrong, someone would have to notice. I’d argue that if you want to overturn one of these big claims, you have to disprove not just the result itself, but every other success the field has ever had.

I still believe that, somewhat. But there are a lot of contrarians here at the Niels Bohr Institute. And I’ve started to appreciate what drives them.

The thing is, no scientific result is ever as clean as it ought to be. Everything we do is jury-rigged. We’re almost never experts in everything we’re trying to do, so we often don’t know the best method. Instead, we approximate and guess, we find rough shortcuts and don’t check if they make sense. This can take us far sometimes, sure…but it can also backfire spectacularly.

The contrarians I’ve known got their inspiration from one of those backfires. They saw a result, a respected mainstream result, and found a glaring screw-up. Maybe it was an approximation that didn’t make any sense, or a statistical measure that was totally inappropriate. Whatever it was, it got them to dig deeper, and suddenly they saw screw-ups all over the place. When they pointed out these problems, at best the people they accused didn’t understand; at worst, they got offended. Instead of cooperation, the contrarians were told they couldn’t possibly know what they were talking about, and were ignored. Eventually, they concluded the entire sub-field was broken.

Are they right?

Not always. They can’t all be: you can find a contrarian for every claim, so believing all of them would be a contradiction.

But sometimes?

Often, they’re right about the screw-ups. They’re right that there’s a cleaner, more proper way to do that calculation, a statistical measure more suited to the problem. And often, doing things right raises subtleties, means that the big important result everyone believed looks a bit less impressive.

Still, that’s not the same as ruling out the result entirely. And despite all the screw-ups, the main result is still often correct. Often, it’s justified not by the original, screwed-up argument, but by newer evidence from a different direction. Often, the sub-field has grown to the point that the original screwed-up argument doesn’t really matter anymore.

Often, but again, not always.

I still don’t know whether to trust the contrarians. I still lean towards expecting fields to sort themselves out, to thinking that error alone can’t sustain long-term research. But I’m keeping a more open mind now. I’m waiting to see how far the contrarians go.

In Defense of the Streetlight

If you read physics blogs, you’ve probably heard this joke before:

A policeman sees a drunk man searching for something under a streetlight and asks what the drunk has lost. He says he lost his keys and they both look under the streetlight together. After a few minutes the policeman asks if he is sure he lost them here, and the drunk replies, no, and that he lost them in the park. The policeman asks why he is searching here, and the drunk replies, “this is where the light is”.

The drunk’s line of thinking has a name, the streetlight effect, and while it may seem ridiculous it’s a common error, even among experts. When it gets too tough to research something, scientists will often be tempted by an easier problem even if it has little to do with the original question. After all, it’s “where the light is”.

Physicists get accused of this all the time. Dark matter could be completely undetectable on Earth, but physicists still build experiments to search for it. Our universe appears to be curved one way, but string theory makes it much easier to study universes curved the other way, so physicists write a lot of nice proofs about a universe we don’t actually inhabit. In my own field, we spend most of our time studying a very nice theory that we know can’t describe the real world.

I’m not going to defend this behavior in general. There are real cases where scientists trick themselves into thinking they can solve an easy problem when they need to solve a hard one. But there is a crucial difference between scientists and drunkards looking for their keys, one that makes this behavior a lot more reasonable: scientists build technology.

As scientists, we can’t just grope around in the dark for our keys. The spaces we’re searching, from the space of all theories of gravity to actual outer space, are much too vast to search randomly. We need new ideas, new mathematics or new equipment, to do the search properly. If we were the drunkard of the story, we’d need to invent night-vision goggles.

Is the light better here, or is it just me?

Suppose you wanted to design new night-vision goggles, to search for your keys in the park. You could try to build them in the dark, but you wouldn’t be able to see what you were doing: you’d lose pieces, miss screws, and break lenses. Much better to build the goggles under that convenient streetlight.

Of course, if you build and test your prototype goggles under the streetlight, you risk that they aren’t good enough for the dark. You’ll have calibrated them in an unrealistic case. In all likelihood, you’ll have to go back and fix your goggles, tweaking them as you go, and you’ll run into the same problem: you can’t see what you’re doing in the dark.

At that point, though, you have an advantage: you now know how to build night-vision goggles. You’ve practiced building goggles in the light, and now even if the goggles aren’t good enough, you remember how you put them together. You can tweak the process, modify your goggles, and make something good enough to find your keys. You’re good enough at making goggles that you can modify them now, even in the dark.

Sometimes scientists really are like the drunk, searching under the most convenient streetlight. Sometimes, though, scientists are working where the light is for a reason. Instead of wasting their time lost in the dark, they’re building new technology and practicing new methods, getting better and better at searching until, when they’re ready, they can go back and find their keys. Sometimes, the streetlight is worth it.

Adversarial Collaborations for Physics

Sometimes physics debates get ugly. For the scientists reading this, imagine your worst opponents. Think of the people who always misinterpret your work while using shoddy arguments to prop up their own, the ones with whom every question at a talk becomes a screaming match, until you just stop going to the same conferences altogether.

Now, imagine writing a paper with those people.

Adversarial collaborations, the subject of a recent contest on the blog Slate Star Codex, are a proposed method for resolving scientific debates. Two scientists on opposite sides of an argument commit to writing a paper together, describing the overall state of knowledge on the topic. For the paper to get published, both sides have to sign off on it: they both have to agree that everything in the paper is true. This prevents either side from cheating, or from coming back later with made-up objections: if a point in the paper is wrong, one side or the other is bound to catch it.

This won’t work for the most vicious debates, when one (or both) sides isn’t interested in common ground. But for some ongoing debates in physics, I think this approach could actually help.

One advantage of adversarial collaborations is in preventing accusations of bias. The debate between dark matter and MOND-like proposals is filled with these kinds of accusations: claims that one group or another is ignoring important data, being dishonest about the parameters they need to fit, or applying standards of proof they would never require of their own pet theory. Adversarial collaboration prevents these kinds of accusations: whatever comes out of an adversarial collaboration, both sides would make sure the other side didn’t bias it.

Another advantage of adversarial collaborations is that they make it much harder for one side to move the goalposts, or to accuse the other side of moving the goalposts. From the sidelines, one thing that frustrates me watching string theorists debate whether the theory can describe de Sitter space is that they rarely articulate what it would take to decisively show that a particular model gives rise to de Sitter. Any conclusion of an adversarial collaboration between de Sitter skeptics and optimists would at least guarantee that both parties agreed on the criteria. Similarly, I get the impression that many debates about interpretations of quantum mechanics are bogged down by one side claiming they’ve closed off a loophole with a new experiment, only for the other to claim it wasn’t the loophole they were actually using, something that could be avoided if both sides were involved in the experiment from the beginning.

It’s possible, even likely, that no-one will try adversarial collaboration for these debates. Even if they did, it’s quite possible the collaborations wouldn’t be able to agree on anything! Still, I have to hope that someone takes the plunge and tries writing a paper with their enemies. At minimum, it’ll be an interesting read!

The Physics Isn’t New, We Are

Last week, I mentioned the announcement from the IceCube, Fermi-LAT, and MAGIC collaborations of high-energy neutrinos and gamma rays detected from the same source, the blazar TXS 0506+056. Blazars are sources of gamma rays, thought to be enormous spinning black holes that act like particle colliders vastly more powerful than the LHC. This one, near Orion’s elbow, is “aimed” roughly at Earth, allowing us to detect the light and particles it emits. On September 22, a neutrino with energy around 300 TeV was detected by IceCube (a kilometer-wide block of Antarctic ice stuffed with detectors), coming from the direction of TXS 0506+056. Soon after, the satellite Fermi-LAT and ground-based telescope MAGIC were able to confirm that the blazar TXS 0506+056 was flaring at the time. The IceCube team then looked back, and found more neutrinos coming from the same source in earlier years. There are still lingering questions (Why didn’t they see this kind of behavior from other, closer blazars?) but it’s still a nice development in the emerging field of “multi-messenger” astronomy.

It also got me thinking about a conversation I had a while back, before one of Perimeter’s Public Lectures. An elderly fellow was worried about the LHC. He wondered if putting all of that energy in the same place, again and again, might do something unprecedented: weaken the fabric of space and time, perhaps, until it breaks? He acknowledged this didn’t make physical sense, but what if we’re wrong about the physics? Do we really want to take that risk?

At the time, I made the same point that gets made to counter fears of the LHC creating a black hole: that the energy of the LHC is less than the energy of cosmic rays, particles from space that collide with our atmosphere on a regular basis. If there was any danger, it would have already happened. Now, knowing about blazars, I can make a similar point: there are “galactic colliders” with energies so much higher than any machine we can build that there’s no chance we could screw things up on that kind of scale: if we could, they already would have.
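
To put rough numbers on that argument (these are standard round figures I’m assuming myself, not anything from the announcements): the fair comparison is center-of-mass energy, since a cosmic ray slams into a proton that’s essentially at rest in the atmosphere. A few lines of Python make the gap obvious:

import math

m_p = 0.938e9        # proton rest energy, eV
lhc = 13e12          # LHC proton-proton center-of-mass energy, eV
cosmic_ray = 3e20    # highest-energy cosmic ray ever observed, eV (lab frame)

# For an ultra-relativistic fixed-target collision, E_cm ~ sqrt(2 * E_lab * m_p):
cosmic_ray_cm = math.sqrt(2 * cosmic_ray * m_p)

print(f"LHC:        {lhc:.1e} eV")
print(f"Cosmic ray: {cosmic_ray_cm:.1e} eV ({cosmic_ray_cm / lhc:.0f} times the LHC)")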

This connects to a broader point, about how to frame particle physics. Each time we build an experiment, we’re replicating something that’s happened before. Our technology simply isn’t powerful enough to do something truly unprecedented in the universe: we’re not even close! Instead, the point of an experiment is to reproduce something where we can see it. It’s not the physics itself, but our involvement in it, our understanding of it, that’s genuinely new.

The IceCube experiment itself is a great example of this: throughout Antarctica, neutrinos collide with ice. The only difference is that in IceCube’s ice, we can see them do it. More broadly, I have to wonder how much this is behind the “unreasonable effectiveness of mathematics”: if mathematics is just the most precise way humans have to communicate with each other, then of course it will be effective in physics, since the goal of physics is to communicate the nature of the world to humans!

There may well come a day when we’re really able to do something truly unprecedented, that has never been done before in the history of the universe. Until then, we’re playing catch-up, taking laws the universe has tested extensively and making them legible, getting humanity that much closer to understanding physics that, somewhere out there, already exists.

A LIGO in the Darkness

For the few of you who haven’t yet heard: LIGO has detected gravitational waves from a pair of colliding neutron stars, and that detection has been confirmed by observations of the light from those stars.

[Image: GW170817 fact sheet]

They also provide a handy fact sheet.

This is a big deal! On a basic level, it means that we now have confirmation from other instruments and sources that LIGO is really detecting gravitational waves.

The implications go quite a bit further than that, though. You wouldn’t think that just one observation could tell you very much, but this is an observation of an entirely new type, the first time an event has been seen in both gravitational waves and light.

That, it turns out, means that this one observation clears up a whole pile of mysteries in one blow. It shows that at least some gamma ray bursts are caused by colliding neutron stars, that neutron star collisions can give rise to the high-power “kilonovas” capable of forming heavy elements like gold…well, I’m not going to be able to do justice to the full implications in this post. Matt Strassler has a pair of quite detailed posts on the subject, and Quanta magazine’s article has a really great account of the effort that went into the detection, including coordinating the network of telescopes that made it possible.

I’ll focus here on a few aspects that stood out to me.

One fun part of the story behind this detection was how helpful “failed” observations were. VIRGO (the European gravitational wave experiment) was running alongside LIGO at the time, but VIRGO didn’t see the event (or saw it so faintly it couldn’t be sure it saw it). This was actually useful, because VIRGO has a blind spot, and VIRGO’s non-observation told them the event had to have happened in that blind spot. That narrowed things down considerably, and allowed telescopes to close in on the actual merger. IceCube, the neutrino observatory that is literally a cubic kilometer chunk of Antarctica filled with sensors, also failed to detect the event, and this was also useful: along with evidence from other telescopes, it suggests that the “jet” of particles emitted by the merged neutron stars is tilted away from us.

One thing brought up at LIGO’s announcement was that seeing gravitational waves and electromagnetic light at roughly the same time puts limits on any difference between the speed of light and the speed of gravity. At the time I wondered if this was just a throwaway line, but it turns out a variety of proposed modifications of gravity predict that gravitational waves will travel slower than light. This event rules out many of those models, and tightly constrains others.
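
To get a feel for how tight that constraint is, here’s a rough order-of-magnitude estimate, using round numbers I’m assuming for illustration (not LIGO’s official analysis): the gamma rays arrived about 1.7 seconds after the gravitational waves, from a source roughly 40 megaparsecs away.

c = 3.0e8                    # speed of light, m/s
mpc = 3.086e22               # one megaparsec, m

distance = 40 * mpc          # rough distance to the merger, m
travel_time = distance / c   # around 130 million years, in seconds
delay = 1.7                  # arrival delay between light and gravitational waves, s

# If the whole delay came from a speed difference, the fractional
# difference would be the delay divided by the total travel time:
print(f"|v_gw - c| / c  <~  {delay / travel_time:.0e}")   # roughly 4e-16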

The announcement from LIGO was screened at NBI, but they didn’t show the full press release. Instead, they cut to a discussion for local news featuring NBI researchers from the various telescope collaborations that observed the event. Some of this discussion was in Danish, so it was only later that I heard about the possibility of using the simultaneous measurement of gravitational waves and light to measure the expansion of the universe. While this event by itself didn’t result in a very precise measurement, as more collisions are observed the statistics will get better, which will hopefully clear up a discrepancy between two previous measurements of the expansion rate.
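
The idea, a “standard siren” measurement, is simple enough to sketch with round numbers (my own illustrative values, not the published analysis): the gravitational wave signal directly gives the distance to the source, the host galaxy’s redshift gives its recession velocity, and Hubble’s law connects the two.

velocity = 3000.0   # recession velocity of the host galaxy, km/s (illustrative)
distance = 43.0     # distance inferred from the gravitational waves, Mpc (illustrative)

# Hubble's law: v = H0 * d, so H0 = v / d
H0 = velocity / distance
print(f"H0 ~ {H0:.0f} km/s/Mpc")   # ~70, between the two discrepant measurements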

A few news sources made it sound like observing the light from the kilonova has let scientists see directly which heavy elements were produced by the event. That isn’t quite true, as stressed by some of the folks I talked to at NBI. What is true is that the light was consistent with patterns observed in past kilonovas, which are estimated to be powerful enough to produce these heavy elements. However, actually pointing out the lines corresponding to these elements in the spectrum of the event hasn’t been done yet, though it may be possible with further analysis.

A few posts back, I mentioned a group at NBI who had been critical of LIGO’s data analysis and raised doubts of whether they detected gravitational waves at all. There’s not much I can say about this until they’ve commented publicly, but do keep an eye on the arXiv in the next week or two. Despite the optimistic stance I take in the rest of this post, the impression I get from folks here is that things are far from fully resolved.

Congratulations to Rainer Weiss, Barry Barish, and Kip Thorne!

The Nobel Prize in Physics was announced this week, awarded to Rainer Weiss, Kip Thorne, and Barry Barish for their work on LIGO, the gravitational wave detector.


Many expected the Nobel to go to LIGO last year, but the Nobel committee waited. At the time, it was expected the prize would be awarded to Rainer Weiss, Kip Thorne, and Ronald Drever, the three founders of the LIGO project, but there were advocates for Barry Barish as well. Traditionally, the Nobel is awarded to at most three people, so the argument got fairly heated, with opponents arguing Barish was “just an administrator” and advocates pointing out that he was “just the administrator without whom the project would have been cancelled in the ’90s”.

All of this ended up being irrelevant when Drever died last March. The Nobel isn’t awarded posthumously, so the list of obvious candidates (or at least obvious candidates who worked on LIGO) was down to three, which simplified things considerably for the committee.

LIGO’s work is impressive and clearly Nobel-worthy, but I would be remiss if I didn’t mention that there is some controversy around it. In June, several of my current colleagues at the Niels Bohr Institute uploaded a paper arguing that if you subtract the gravitational wave signal that LIGO claims to have found, then the remaining data, the “noise”, is still correlated between LIGO’s two detectors, which it shouldn’t be if it were actually just noise. LIGO hasn’t released an official response yet, but a LIGO postdoc responded with a guest post on Sean Carroll’s blog, and the team at NBI had responses of their own.
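
To make the logic of that test concrete, here’s a toy version in Python. This is my own illustration of the general technique, not the NBI group’s actual analysis: inject the same fake signal into two independent noise streams, subtract a template, and check whether the residuals still correlate.

import numpy as np

rng = np.random.default_rng(1)
n = 100_000
t = np.linspace(0.0, 1.0, n)

# The same fake "signal" buried in two independent noise streams,
# standing in for the two detectors:
signal = 5 * np.sin(2 * np.pi * 100 * t) * np.exp(-((t - 0.5) ** 2) / 0.01)
hanford = signal + rng.normal(0.0, 1.0, n)
livingston = signal + rng.normal(0.0, 1.0, n)

# A perfect template leaves independent noise behind; an imperfect one
# leaves the same leftover signal in both residuals at once:
for label, template in [("perfect", signal), ("imperfect", 0.7 * signal)]:
    r = np.corrcoef(hanford - template, livingston - template)[0, 1]
    print(f"{label} template: residual correlation = {r:+.3f}")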

I’d usually be fairly skeptical of this kind of argument: it’s easy for an outsider looking at the data from a big experiment like this to miss important technical details that make the collaboration’s analysis work. That said, having seen some conversations between these folks, I’m a bit more sympathetic. LIGO hadn’t been communicating very clearly initially, and it led to a lot of unnecessary confusion on both sides.

One thing that I don’t think has been emphasized enough is that there are two claims LIGO is making: that they detected gravitational waves, and that they detected gravitational waves from black holes of specific masses at a specific distance. The former claim could be supported by the existence of correlated events between the detectors, without many assumptions as to what the signals should look like. The team at NBI seem to have found a correlation of that sort, but I don’t know if they still think the argument in that paper holds given what they’ve said elsewhere.

The second claim, that the waves were from a collision of black holes with specific masses, requires more work. LIGO compares the signal to various models, or “templates”, of black hole events, trying to find one that matches well. This is what the group at NBI subtracts to get the noise contribution. There’s a lot of potential for error in this sort of template-matching. If two templates are quite similar, it may be that the experiment can’t tell the difference between them. At the same time, the individual template predictions have their own sources of uncertainty, coming from numerical simulations and “loops” in particle physics-style calculations. I haven’t yet found a clear explanation from LIGO of how they take these various sources of error into account. It could well be that even if they definitely saw gravitational waves, they don’t actually have clear evidence for the specific black hole masses they claim to have seen.
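
For a sense of how template matching works in miniature, here’s a deliberately crude sketch, a toy of my own and nothing like the collaboration’s actual matched filtering: score each candidate waveform by its normalized overlap with the data.

import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 2048)

def chirp(rate):
    # a rising-frequency "chirp", loosely mimicking a merger waveform
    return np.sin(2 * np.pi * (30 + rate * t) * t)

def overlap(data, template):
    # normalized inner product: 1 is a perfect match, near 0 is no match
    return abs(np.dot(data, template)) / (np.linalg.norm(data) * np.linalg.norm(template))

data = chirp(40.0) + rng.normal(0.0, 0.5, t.size)   # true signal buried in noise

for rate in (40.0, 39.8, 20.0):
    print(f"template rate {rate}: overlap = {overlap(data, chirp(rate)):.2f}")

# The true template scores best, but a nearby template still scores well:
# the kind of near-degeneracy that makes the error analysis delicate.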

I’m sure we’ll hear more about this in the coming months, as both groups continue to talk through their disagreement. Hopefully we’ll get a clearer picture of what’s going on. In the meantime, though, Weiss, Barish, and Thorne have accomplished something impressive regardless, and should enjoy their Nobel.

Simple Rules Don’t Mean a Simple Universe

It’s always fun when nature surprises you.

This week, the Perimeter Colloquium was given by Laura Nuttall, a member of the LIGO collaboration.

In a physics department, the colloquium is a regularly scheduled talk that’s supposed to be of interest to the entire department. Some are better at this than others, but this one was pretty fun. The talk explored the sorts of questions gravitational wave telescopes like LIGO can answer about the world.

At one point during the talk, Nuttall showed a plot of what happens when a star collapses into a supernova. For a range of masses, the supernova leaves behind a neutron star (shown on the plot in purple). For heavier stars, it instead results in a black hole, a big black region of the plot.

What surprised me was that inside the black region there was an unexpected gap: a band of white in the middle of the black holes. Heavier than that band, the star forms a black hole. Lighter, it also forms a black hole. But inside?

Nothing. The star leaves nothing behind. It just explodes.

The physical laws that govern collapsing stars might not be simple, but at least they sound straightforward. Stars are constantly collapsing under their own weight, held up only by the raging heat of nuclear fire. If that heat isn’t strong enough, the star collapses, and other forces take over, so the star becomes a white dwarf, or a neutron star. And if none of those forces is strong enough, the star collapses completely, forming a black hole.

Too small, neutron star. Big enough, black hole. It seems obvious. But reality makes things more complicated.

It turns out, if a star is both very massive and comparatively metal-poor, its core can get very, very hot. That heat powers an explosion more powerful than a typical supernova, one that tears the star apart completely, leaving nothing behind that could form a black hole. Lighter stars don’t get as hot, so they can still form black holes, and heavier stars are so heavy they form black holes anyway. But for those specific stars in the middle, nothing gets left behind.

This isn’t due to mysterious unknown physics. It’s actually been understood for quite some time. It’s a consequence of those seemingly straightforward laws, one that isn’t at all obvious until you do the work and run the simulations and observe the universe and figure it out.

Just because our world is governed by simple laws doesn’t mean the universe itself is simple. Give it a little room (and several stars’ worth of hydrogen) and it can still surprise you.